Documentation

Technical reference for metrics, definitions, and methodology

Glossary

DMS (Data Management System)

Alteridad's orchestration platform that turns operational data into living rules. Combines analytics, rule enforcement, and process mapping in one continuous loop.

Data Quality Dimensions

Industry-standard categories for measuring data fitness:

  • Completeness: % of required fields that are filled
  • Validity: % of values matching expected type, format, or domain
  • Uniqueness: % of records without duplicates
  • Consistency: % of records with no cross-system conflicts
  • Accuracy: % of values matching a trusted reference source
  • Conformity: % of values using standard codes or units
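
As a concrete illustration, the first two dimensions can be scored in a few lines of Python. This is a minimal sketch: the record fields and the SKU format below are hypothetical, not DMS specifics.

```python
import re

# Hypothetical product records; None marks a missing required field.
records = [
    {"sku": "AB-1001", "price": 19.99, "stock": 5},
    {"sku": "ab1001",  "price": 19.99, "stock": None},  # malformed SKU, missing stock
    {"sku": "AB-1002", "price": None,  "stock": 12},    # missing price
]
required = ["sku", "price", "stock"]
sku_pattern = re.compile(r"^[A-Z]{2}-\d{4}$")  # assumed SKU format for this example

# Completeness: share of required fields that are filled across all records.
filled = sum(1 for r in records for f in required if r.get(f) is not None)
completeness = filled / (len(records) * len(required))

# Validity: share of SKU values matching the expected format.
valid_skus = sum(1 for r in records if r["sku"] and sku_pattern.match(r["sku"]))
validity = valid_skus / len(records)

print(f"Completeness: {completeness:.0%}")  # 7 of 9 fields filled -> 78%
print(f"Validity (SKU): {validity:.0%}")    # 2 of 3 SKUs well-formed -> 67%
```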

Rule Regime

A structured set of executable business rules that validate and correct data. Rules codify implicit policies (e.g., "Invoice amounts must match line items") into deterministic constraints with automatic enforcement.
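
The invoice example above can be sketched as a deterministic constraint in Python. The invoice shape (`total`, `line_items`) is an assumed illustration, not the DMS rule syntax.

```python
from decimal import Decimal

def check_invoice_total(invoice: dict) -> bool:
    """Rule: invoice total must equal the sum of its line-item amounts."""
    line_sum = sum(Decimal(str(li["amount"])) for li in invoice["line_items"])
    return Decimal(str(invoice["total"])) == line_sum

# A conforming invoice: 10.00 + 20.50 reconciles with the stated total.
ok = check_invoice_total({
    "total": "30.50",
    "line_items": [{"amount": "10.00"}, {"amount": "20.50"}],
})
print(ok)  # True
```

Using `Decimal` rather than floats keeps the comparison exact, which matters for a deterministic rule: the check either passes or fails, with no rounding ambiguity.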

Process Mining

Technique for discovering, monitoring, and improving real operational processes by extracting knowledge from event logs. Reveals actual workflows vs. intended workflows, identifies bottlenecks, and measures conformance.

Data Observability

The practice of monitoring data pipelines and systems for freshness, volume anomalies, schema drift, and lineage. Ensures data reliability before it reaches analytics or operations.
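
A minimal freshness check in this spirit might look as follows; the timestamps and the 2-hour threshold are made-up values for illustration.

```python
from datetime import datetime, timedelta, timezone

def freshness_lag(latest_event: datetime, now: datetime) -> timedelta:
    """Lag between the newest event in a table and the current time."""
    return now - latest_event

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
latest = datetime(2024, 1, 1, 9, 30, tzinfo=timezone.utc)

lag = freshness_lag(latest, now)
print(lag > timedelta(hours=2))  # True -- stale against an assumed 2-hour SLA
```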

HITL (Human-in-the-Loop)

AI-assisted approach where humans approve or adjust AI suggestions. DMS uses HITL for rule creation: AI suggests rules based on patterns, domain experts validate and refine them.

SKU (Stock Keeping Unit)

Unique identifier for a distinct product or service in inventory management. Critical for retail data quality - duplicate or malformed SKUs cause inventory errors, fulfillment delays, and revenue leakage.
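
The hidden-duplicate problem can be illustrated in a few lines: after normalizing case and separators, two raw identifiers collapse into one SKU. The normalization scheme here is an assumption for illustration.

```python
from collections import Counter

def normalize_sku(raw: str) -> str:
    # Assumed normalization: trim, uppercase, unify separators.
    return raw.strip().upper().replace(" ", "-")

raw_skus = ["AB-1001", "ab 1001", "AB-1002"]

counts = Counter(normalize_sku(s) for s in raw_skus)
duplicates = {sku: n for sku, n in counts.items() if n > 1}
print(duplicates)  # {'AB-1001': 2} -- "ab 1001" is a hidden duplicate
```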

Metrics Reference

DMS tracks 35 industry-standard metrics across 5 categories. All metrics are adapted from established frameworks (see Sources).

Data Quality (7 metrics)

Core dimensions that measure whether data is fit for operational use.

  1. Completeness: % of required fields filled. Critical for inventory (SKU, price, stock level) and CRM (email, phone).
  2. Validity: % of values matching type, format, or domain. E.g., email addresses must match RFC 5322, dates must be valid ISO 8601.
  3. Uniqueness: % of records without duplicates. Prevents double-billing, duplicate inventory, and CRM clutter.
  4. Consistency: % of records with no cross-table conflicts. E.g., customer address in CRM must match invoice address in ERP.
  5. Accuracy: % of values matching a trusted reference. Compare product weights to manufacturer specs, addresses to postal databases.
  6. Conformity: % of values using standard codes or units. E.g., use ISO 4217 for currencies, ISO 3166 for country codes.
  7. Referential Integrity: Orphan foreign keys per 1,000 rows. Invoice line items must reference valid products, orders must reference valid customers.
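
As a worked example of metric 7, the following sketch counts orphan foreign keys and scales them per 1,000 rows. The table contents are hypothetical.

```python
# Hypothetical reference table and fact table.
products = {"P1", "P2", "P3"}
line_items = [{"product_id": "P1"}, {"product_id": "P9"}, {"product_id": "P2"}]

# An orphan is a line item whose product_id has no matching product.
orphans = sum(1 for li in line_items if li["product_id"] not in products)
orphans_per_1000 = orphans / len(line_items) * 1000
print(round(orphans_per_1000))  # 333 -- 1 orphan in 3 rows
```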

Rule Engine & Governance (9 metrics)

Metrics that track the effectiveness and efficiency of rule enforcement.

  1. Rule Coverage: % of critical fields with at least one active rule.
  2. Rule Pass Rate: % of rows passing all active rules.
  3. Issue Backlog: Count of open violations.
  4. Fix Throughput: Violations resolved per week.
  5. Auto-Fix Rate: % of violations auto-remediated without human intervention.
  6. Alert Precision: % of alerts that are true issues (not false positives).
  7. MTTD (Mean Time to Detect): Average time from violation occurrence to detection.
  8. MTTR (Mean Time to Resolve): Average time from detection to resolution.
  9. Review SLA Adherence: % of reviews completed on time.
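
Metrics 7 and 8 can be computed directly from violation timestamps. A sketch over two hypothetical violations:

```python
from datetime import datetime, timedelta

violations = [
    # (occurred, detected, resolved)
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 10), datetime(2024, 1, 1, 14)),
    (datetime(2024, 1, 2, 9), datetime(2024, 1, 2, 12), datetime(2024, 1, 2, 13)),
]

# MTTD: average occurrence-to-detection gap; MTTR: average detection-to-resolution gap.
mttd = sum(((d - o) for o, d, _ in violations), timedelta()) / len(violations)
mttr = sum(((r - d) for _, d, r in violations), timedelta()) / len(violations)
print(mttd, mttr)  # 2:00:00 2:30:00 -- detect in 2 h, resolve in 2.5 h on average
```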

Process Mining (7 metrics)

Operational workflow metrics extracted from event logs.

  1. Case Throughput Time: Start-to-finish duration for a process (e.g., quote-to-cash).
  2. Waiting Time: Idle time between process steps.
  3. Variant Count: Number of unique paths through the process.
  4. Conformance Rate: % of cases matching the target process model.
  5. Rework Rate: % of cases repeating a step due to errors.
  6. Straight-Through Processing Rate: % of cases with no manual intervention.
  7. SLA Breach Rate: % of cases exceeding SLA.
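
Metrics 3 and 4 can be derived from an event log by grouping events into per-case paths. A sketch over a tiny hypothetical log (case IDs, activities, and the target model are made up):

```python
from collections import defaultdict

event_log = [  # (case_id, activity), already in time order within each case
    ("c1", "quote"), ("c1", "order"), ("c1", "invoice"),
    ("c2", "quote"), ("c2", "invoice"),              # skips "order"
    ("c3", "quote"), ("c3", "order"), ("c3", "invoice"),
]
target = ("quote", "order", "invoice")  # intended process model

# Group events into one path per case.
paths = defaultdict(list)
for case, activity in event_log:
    paths[case].append(activity)

# Variant Count: distinct paths. Conformance Rate: share of cases matching the model.
variants = {tuple(p) for p in paths.values()}
conformance = sum(1 for p in paths.values() if tuple(p) == target) / len(paths)
print(len(variants), f"{conformance:.0%}")  # 2 67%
```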

Data Observability & Pipelines (5 metrics)

Infrastructure health metrics for data pipelines.

  1. Freshness Lag: Time from source event to data warehouse availability.
  2. Volume Anomaly Rate: % of pipeline runs with unexpected row counts.
  3. Schema Drift Incidents: Changes to table structure detected per month.
  4. Lineage Coverage: % of datasets with lineage captured (source → transformation → destination).
  5. Pipeline Success Rate: % of successful pipeline runs.
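
As a worked example of metric 2, the following sketch flags runs whose row count deviates from a baseline by more than a fixed tolerance. The row counts and the ±30% tolerance are illustrative assumptions.

```python
from statistics import mean

run_counts = [1000, 1020, 980, 400, 1010]  # rows loaded per pipeline run
baseline = mean(run_counts[:3])            # baseline from the first few runs

# A run is anomalous if its volume deviates from baseline by more than 30%.
anomalies = [n for n in run_counts if abs(n - baseline) / baseline > 0.30]
rate = len(anomalies) / len(run_counts)
print(f"{rate:.0%}")  # 20% -- one run (400 rows) out of five is anomalous
```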

Adoption & Coverage (7 metrics)

DMS-specific metrics that track platform usage and impact.

  1. Monitored Fields: Count of fields under active rules.
  2. Table Coverage: % of critical tables onboarded to DMS.
  3. Rule Execution Volume: Rule evaluations per day.
  4. Issues Resolved: Fixes closed per week.
  5. Data Doc Completeness: % of items with master documentation.
  6. Active Users: Monthly active DMS users (rule authors, reviewers, analysts).
  7. Loop Completions: Full cycles from Analytics → Rules → Process Map per month.

Methodology

1. Pilot Structure (4-6 weeks)

  • Week 1: Discovery. Identify 1-2 high-impact operational areas (e.g., invoice validation, inventory completeness). Establish baseline metrics.
  • Week 2: Data integration. Connect to ERP/CRM systems via API or database snapshot. Begin analytics (volume analysis, pattern detection).
  • Weeks 3-4: Rule creation. AI suggests rules based on patterns; domain experts validate. Deploy rules with automatic alerts.
  • Weeks 5-6: Process mapping and optimization. Visualize operational flows, identify bottlenecks, implement self-healing logic. Track improvement vs. baseline.

2. Metrics Selection

We don't track all 35 metrics at once. Pilot metrics are selected based on:

  • Financial impact: Which data issues cost the most? (DSO, denial rates, returns)
  • Feasibility: Can we measure it with available data?
  • Actionability: Can we fix issues once detected?

3. Human-in-the-Loop (HITL) Approach

DMS uses AI to accelerate rule creation, but humans always validate:

  1. AI analyzes historical data and suggests rules (e.g., "SKU must be unique per country").
  2. Domain expert reviews and refines the rule (adds exceptions, adjusts thresholds).
  3. Rule is deployed with deterministic enforcement (no probabilistic AI in production).

4. Success Criteria

A pilot is successful if we achieve measurable improvement in at least one operational metric:

  • Data quality: +10% completeness, -50% duplicates, etc.
  • Process efficiency: -20% case throughput time, -30% rework rate, etc.
  • Financial: -10% DSO, -15% invoice denial rate, -$X monthly returns, etc.

Sources & Standards

DMS metrics are adapted from industry-leading frameworks and tools. We don't invent metrics - we implement proven standards.

IBM — Data Quality Dimensions →

Foundational framework for completeness, validity, uniqueness, consistency, accuracy, and conformity.

DAMA (Data Management Association) →

DAMA-DMBOK (Data Management Body of Knowledge) for data governance and rule management practices.

Celonis — Process Mining →

Industry-leading process mining platform. Metrics for throughput time, conformance rate, and variant analysis.

UiPath — Process Mining →

Automation and process intelligence. Metrics for rework rate, straight-through processing, and SLA adherence.

Monte Carlo Data — Data Observability →

Pioneer in data observability. Metrics for freshness lag, volume anomalies, and schema drift.

DQOps — Data Quality Operations →

Open-source data quality platform. Best practices for rule coverage, pass rates, and alert precision.

Google — Rules of Machine Learning →

Best practices for MTTD (Mean Time to Detect) and MTTR (Mean Time to Resolve) in data systems.

DMS-Specific Metrics

Some metrics are unique to DMS capabilities (e.g., "Loop Completions", "Rule Execution Volume"). These track platform adoption and the compounding effect of the continuous loop.

Questions?

Need clarification on a metric or methodology?

Get in touch →