With converging standards and test measures rolling out, dQMs are coming into focus. The benefits are clear, but getting the best results and making measure operations more efficient will require additional changes.
NCQA has started using the term “digital quality ecosystem,” which is an accurate description of the broader environment in which dQMs operate. That ecosystem includes data sources, data aggregators, systems and technologies, in-house departments and vendors. Operating models describe how all these components fit together in a future where quality measurement is completely digital.
At the highest level, we can think of operating models as two “flows” or processes.
1. Data Flow: How data flows from many different sources into a digital quality reporting system/platform and how quality metrics are reported from that platform.
2. Validation Chain: How data is validated/audited to ensure accuracy and that it is attributable to a valid source (data provenance).
If we think of a diagram with data sources on the left and the reporting system on the right, the data flow is left to right. Conversely, the validation flow is right to left (tracking data “back” to the source). With standards and simplification programs (e.g., the NCQA DAV program, currently in a trial phase with New York HIEs) advancing over time, both flows will become more uniform and simplified, reducing custom efforts and the need for primary source validation (PSV).
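The two flows can be sketched in code. The following is a minimal, hypothetical illustration (the class and field names are invented for this sketch; a real dQM platform would ingest standardized payloads such as FHIR resources): records flow left to right into a reporting platform, and because each record carries its source (provenance), a reported result can be traced right to left back to where the data came from.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    patient_id: str
    source: str          # data provenance: where this record came from
    numerator_hit: bool  # does this record satisfy the measure numerator?

@dataclass
class ReportingPlatform:
    records: list = field(default_factory=list)

    def ingest(self, record: Record) -> None:
        """Data flow (left to right): sources -> reporting platform."""
        self.records.append(record)

    def report_rate(self) -> float:
        """Report a simple measure rate from the ingested records."""
        if not self.records:
            return 0.0
        hits = sum(1 for r in self.records if r.numerator_hit)
        return hits / len(self.records)

    def trace(self, patient_id: str) -> list:
        """Validation chain (right to left): track data back to its sources."""
        return [r.source for r in self.records if r.patient_id == patient_id]

platform = ReportingPlatform()
platform.ingest(Record("p1", "HIE-NY", True))
platform.ingest(Record("p2", "EHR-ClinicA", False))

print(platform.report_rate())  # 0.5
print(platform.trace("p1"))    # ['HIE-NY']
```

The key design point is that provenance travels with the data: because every record keeps its `source`, the validation chain needs no separate lookup, which is exactly what a certified aggregator (as in the DAV program) makes possible at scale.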
Equipped with this scaffolding, we can then look at existing and future data flows and determine where and how to leverage the best, most direct ways to capture data and validate it. Considerations in that effort should include:
· Transformations: Where and how is data transformed (mapped, substituted, supplemented, summarized and/or consolidated)?
· Persistence: Where is data persisted? Is persistence necessary? What operational and security implications does persistence bring with it?
· Aggregation: How can we access data from many sources at one point? Ideally, the aggregator can also certify the data’s validity (which the DAV program does for HIEs).
· Other Use Cases: Chances are that the data collected also has uses for other programs and departments beyond quality reporting: care management/population health, care coordination, gap closure, risk adjustment and more. Considering the needs of these programs and designing data flows to service all use cases, when possible, will yield significant operational improvement while simplifying data operations.
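The last point, one data flow serving multiple use cases, can be sketched as follows. This is a hypothetical example (the field names, codes and functions are invented for illustration): a single transformation step normalizes source-specific payloads into one shared shape, which both quality reporting and a gap-closure workflow then consume, so the ingestion and mapping work is done once rather than per program.

```python
def normalize(raw: dict) -> dict:
    """Transformation step: map a source-specific payload to a shared shape."""
    return {
        "patient_id": raw["pid"],
        "code": raw["proc_code"],
        "source": raw["src"],  # provenance is preserved through the transform
    }

def quality_reporting(records: list) -> int:
    """Use case 1: count records relevant to a (hypothetical) measure."""
    return sum(1 for r in records if r["code"] == "A1C")

def gap_closure(records: list) -> list:
    """Use case 2: list patients whose record closes a care gap."""
    return [r["patient_id"] for r in records if r["code"] == "A1C"]

raw_feed = [
    {"pid": "p1", "proc_code": "A1C", "src": "HIE-NY"},
    {"pid": "p2", "proc_code": "BP", "src": "EHR-ClinicA"},
]

# Persist the normalized data once; every downstream consumer reuses it.
shared = [normalize(r) for r in raw_feed]

print(quality_reporting(shared))  # 1
print(gap_closure(shared))        # ['p1']
```

Designing the flow this way also answers the persistence question above: the normalized, provenance-bearing layer is the one worth persisting, because every use case reads from it.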
Although this is a high-level approach, it can clarify how we optimize digital quality reporting, reuse operating models, and develop and refine best practices.
· What components do you think are required in a complete operating model?
· What areas are you most concerned about with regard to transitioning to an “all-dQM” future?
· Have you already done work along these lines? What were your results? What did you learn?
Michael Klotz, Healthcare IT Entrepreneur, MK Advisory Services
Michael is a Healthcare IT entrepreneur, consultant and expert in the flow of data between patients, providers and payers (the healthcare data ecosystem), healthcare interoperability, digital quality and digital prescriptions. He applies his understanding of technology and standards, including HL7 FHIR, the regulatory environment, NCQA’s quality measures, the emerging digital quality standards (QDM-CQL, FHIR-CQL) and NCPDP standards, to simplifying and automating secure data exchanges between patients, providers and payers.
Michael has built three successful companies, brought the first SaaS platforms for Medical Records Review to market and has been delivering strategic solutions for 120+ consulting clients for over 20 years.