I’m Michael Klotz, a health care IT entrepreneur and consultant. You may have seen one of my blog posts here on the Digital Measurement Community site, or perhaps a post or video elsewhere.
Today, I want to talk about the dramatic changes underway in the Quality Data Ecosystem—and when I say “dramatic,” I definitely mean in a good way.
Let’s start by defining “Quality Data Ecosystem.” It essentially includes all stakeholders, data, policies and standards, as well as systems and processes, pertaining to quality measurement. If we narrow that down to quality measures that concern health plans, we’re left with the clinical and outcomes data that flows from providers and patients to the health plan.
Looking at those data flows through the lens of implementation, we’re talking about two major, rapidly converging concepts:
1. Digital Quality Measures (dQMs): Systems, artifacts and standards that specifically have to do with “measurement”; in other words, data definitions, rules and reporting.
2. Interoperability: Everything that has to do with how clinical data is accessed and how it flows as input for digital measures.
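To make the first concept a little more concrete, here is a minimal sketch of what “measure logic over standardized clinical data” can look like. Everything in it is hypothetical and simplified: real dQMs are defined against standardized clinical data models with far richer structure, and the record layout, measure logic, and threshold below are illustrative assumptions, not any official measure specification.

```python
from dataclasses import dataclass

# Hypothetical, simplified clinical record. Real digital measures
# consume standardized clinical data, not ad-hoc structures like this.
@dataclass
class Observation:
    patient_id: str
    code: str        # coded observation type (here, a LOINC code)
    value: float
    source: str      # originating system, kept for provenance

def blood_pressure_controlled(observations, patient_id, threshold=140.0):
    """Toy measure logic: is the patient's latest systolic reading
    below the threshold? Assumes the list is in chronological order."""
    readings = [o for o in observations
                if o.patient_id == patient_id and o.code == "8480-6"]
    if not readings:
        return None  # patient has no qualifying data for this measure
    return readings[-1].value < threshold

obs = [
    Observation("p1", "8480-6", 152.0, source="hospital-ehr"),
    Observation("p1", "8480-6", 128.0, source="clinic-ehr"),
]
print(blood_pressure_controlled(obs, "p1"))  # → True (latest reading is 128.0)
```

The point is the separation of concerns: the measure logic only cares about standardized, coded data; interoperability is everything involved in getting that data into the list in the first place.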
To see how the Quality Data Ecosystem is evolving, let’s first look at the current state and then compare that to the future state.
I should mention here that we also refer to these diagrams as “operating models” because they create the high-level framework for measure data operations. Each operating model has two basic components:
- The Data Flow (left to right): How data flows from different sources into a digital quality reporting system or platform, and how that platform reports quality metrics.
- The Validation Chain (right to left): How data is validated and audited to ensure accuracy and attributed to a valid source—establishing what is called “data provenance.”
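The two directions can be sketched in a few lines of code. This is purely illustrative: the source names, record shape, and notion of a “certified source” are assumptions I’m using to show the pattern, not how any real platform is implemented.

```python
# Hypothetical operating-model sketch: data flows left to right into a
# reporting platform; the validation chain walks back right to left,
# tracing each record to its source (data provenance).
CERTIFIED_SOURCES = {"claims-system", "regional-hie"}  # assumed names

records = [
    {"patient": "p1", "event": "a1c_test", "source": "claims-system"},
    {"patient": "p2", "event": "a1c_test", "source": "fax-upload"},
]

# Data flow (left to right): records land in the reporting platform.
platform = list(records)

# Validation chain (right to left): flag any record whose source
# cannot be traced to a validated origin.
def validate(platform):
    return [r for r in platform if r["source"] not in CERTIFIED_SOURCES]

flagged = validate(platform)
print(flagged)  # the fax-upload record lacks certified provenance
```

In the current-state model, that `flagged` list is large and clearing it is manual work; the future-state model aims to shrink it by standardizing and certifying sources upstream.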
A generic operating model for the current Quality Data Ecosystem would look something like what you see in the diagram:
Claims data is the foundation for most measures and measure sets—HEDIS, PQA, CMS and state Medicaid measures. And there’s chart review and an ever-increasing amount of ancillary data (also known as “supplemental data”), which is extremely fragmented, difficult to capture and requires laborious validation steps. I’m sure most of us are very familiar with this model!
The operating models we are developing for the digital future of quality look different.
Structured clinical data is at the center. Claims still matter, at least for a while. Ancillary or supplemental data will be replaced with standardized clinical data, which means validation will be simplified and—eventually—become automated. Aggregators like HIEs and registries, as well as direct access via APIs or real-time interfaces, will simplify and improve measure operations dramatically.
I realize I used the term “aggregator” a moment ago. Let’s look at an example:
HIEs offer a good example, and more specifically the Data Aggregator Validation, or DAV, program, which is currently in a test phase with three HIEs in New York State. The DAV not only standardizes data access, it also eliminates dozens of individual supplemental data feeds and can significantly reduce chart reviews. And it greatly streamlines source validation efforts.
With NCQA’s DAV program, the HIE pulls data from many health systems in a region or state and certifies its HEDIS data feed with NCQA. Only once. That’s the kind of operational improvement we need more of! I think we can call that dramatic!
OK. That’s an overview of how the Quality Data Ecosystem is evolving.