NCQA Strategy to Reduce EHR Burden Comments

NCQA urges ONC to transfer eMeasure certification testing to our more robust methodology.

January 28, 2019

Don Rucker, M.D.
National Coordinator for Health Information Technology
200 Independence Ave. S.W.
Washington, DC 20201

Dear Dr. Rucker:

Thank you for the opportunity to comment on your draft strategy for reducing health information technology (IT) and electronic health record (EHR) burden. The National Committee for Quality Assurance (NCQA) supports the draft strategy, which accurately chronicles lessons learned from the Physician Quality Reporting System (PQRS), Meaningful Use, and HEDIS programs that served as essential bridges to value-based payment models.

However, the final strategy also must focus on improving the accuracy, validity and reliability of electronic clinical quality measure (eCQM) results. There is now wide variation in the accuracy and reliability of quality measure results, and inaccurate results unfairly skew the measurements that determine value-based payments, undermining confidence in those payments.

NCQA’s eMeasure testing methodology, which your Office of the National Coordinator (ONC) has approved for the EHR technology certification program, is much more robust than ONC’s Project Cypress methodology. We therefore urge you to transfer eMeasure certification testing to NCQA, as you did in making the National Council for Prescription Drug Programs the national standard testers for e-Prescribing.

Detailed comments on these and other issues in the draft strategy are below.

Reducing Burden and Adding Value

As stewards of HEDIS®, the most widely used and respected clinical quality measures, NCQA is acutely aware of the urgent need to reduce reporting burden. We are diligently working on many steps called for in the draft through our Data and Measures Roadmap initiative. The Roadmap’s goal is to make quality measurement and reporting as effortless as possible by drawing data from what physicians and other clinicians document in the normal course of patient care.

EHRs combined with electronic clinical data repositories, like registries and health information exchanges, provide a much more robust picture of quality than claims-only or EHR-only reporting.

This data-inclusive approach also addresses workflow concerns as it reduces the necessity for additional clinician documentation solely for the purpose of quality measurement. It is encouraging to see that our efforts to reduce measurement burden so closely align with the well-informed recommendations in your draft strategy.

As our Roadmap leads us to the next generation of HEDIS, it will let us make several additional improvements. It will:

  • Align NCQA measure specifications across the health care system to allow comparisons across practices, hospitals, clinically integrated networks, health plans and other settings.
  • Provide greater flexibility by allowing appropriate adjustments to some specifications that assist specific state, local or system needs in pursuit of quality improvement.
  • Improve accuracy and validity while reducing costs to implement and update measures by letting vendors and health systems digitally download specifications from the cloud, reducing manual data entry and relying more on data already in electronic systems.
  • Enhance the value for patients, clinicians, payers, plans, electronic system vendors and government by reducing costs and efforts while allowing more meaningful measurement.
  • Build the digital foundation for designing future outcome-based measures by leveraging access to the rich clinical data in electronic sources other than claims.

We released six of the new digital measures in 2018 and want to convert more than 50 existing measures over the next three years into standardized “digital measure packages.” Our goal is to provide a glide path that still allows traditional reporting for entities not ready to immediately adopt the new format.

How the Technology Works: The digital measure packages contain human- and machine-readable versions of the measure specifications, using international clinical data standards deployed in the Centers for Medicare & Medicaid Services (CMS) eMeasure program. The standards that form the core of these measure packages are:

  • Quality Data Model (QDM), which defines quality measure elements in a standardized way.
  • Fast Healthcare Interoperability Resources (FHIR) specification, which simplifies data exchange without sacrificing integrity.
  • Clinical Quality Language (CQL), which provides standard logic for tying measure data elements together to produce scores.

These standards provide a template for conducting highly efficient, reliable measure queries against health systems’ data to generate results. The CQL logic does much of the measure calculation “heavy lift” in a highly standard fashion, removing the need for end users to perform any calculation function or attestation.

This is a sharp departure from the current practice of each site creating its own coded interpretation from text descriptions. It reduces the need to check boxes for measure requirements and eliminates burdensome manual record review.
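The separation these standards enable can be sketched as follows. This is an illustrative simplification only: in a real digital measure package the logic ships as CQL and the data arrive as FHIR resources, and every measure name, field and threshold below is hypothetical. The point is that each site runs the same downloaded logic against its data rather than re-coding a text specification.

```python
from datetime import date

# Hypothetical sketch: shared, pre-coded measure logic. In practice this
# logic is expressed in CQL and evaluated over FHIR data; plain Python
# stands in here. All names and thresholds are illustrative.

def in_denominator(patient, period_start):
    """Denominator: patients aged 50-74 at the start of the period (illustrative)."""
    age = period_start.year - patient["birth_date"].year
    return 50 <= age <= 74

def in_numerator(patient, period_start, period_end):
    """Numerator: a qualifying screening during the measurement period (illustrative)."""
    return any(period_start <= d <= period_end for d in patient["screenings"])

def measure_score(patients, period_start, period_end):
    """Every site runs this same downloaded logic, so results are comparable."""
    denominator = [p for p in patients if in_denominator(p, period_start)]
    numerator = [p for p in denominator
                 if in_numerator(p, period_start, period_end)]
    return len(numerator) / len(denominator) if denominator else None
```

Because the scoring logic is distributed with the package, the end user supplies only standardized data, not a local interpretation of the specification.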

How it Provides Greater Flexibility: Electronic measurement provides greater flexibility by allowing pre-defined adjustments to some measure specifications to better meet needs of payers and other measurement stakeholders. The next generation of HEDIS introduces “allowable adjustments” that let entities like states or health systems focus measurement without changing core clinical intent. NCQA understands that stakeholders sometimes adjust specifications to address specific concerns like performance gap analyses and internal quality improvement.

Adjusted measures’ results are not comparable to those generated with published specifications and used for national reporting and benchmarks. However, they provide the ability to target specific populations and time frames and apply HEDIS measures for population health and other quality initiatives. Working within the allowed adjustments, stakeholders can:

  • Adjust the parameters of the populations measured to look at patients by provider group or geographic area.
  • Change the measurement period to focus on part of it for gap analysis.
  • Adjust age stratifications and time ranges, etc.
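The allowable-adjustment idea above can be sketched as parameters layered over a fixed published specification. This is a hypothetical illustration, not NCQA's implementation; all names and values are invented, and the sketch flags adjusted results because, as noted above, they are not comparable to nationally reported results.

```python
from datetime import date

# Hypothetical sketch: "allowable adjustments" modeled as explicit parameters
# whose defaults match the published specification. Core clinical logic is
# untouched; only the allowed parameters vary. All names are illustrative.

PUBLISHED_SPEC = {
    "age_min": 50,
    "age_max": 74,
    "period": (date(2018, 1, 1), date(2018, 12, 31)),
    "region": None,  # None means all geographic areas
}

def eligible_population(patients, adjustments=None):
    """Apply the core logic with only the allowed parameters adjusted."""
    spec = {**PUBLISHED_SPEC, **(adjustments or {})}
    period_start, _period_end = spec["period"]
    selected = []
    for p in patients:
        age = period_start.year - p["birth_date"].year
        if not (spec["age_min"] <= age <= spec["age_max"]):
            continue
        if spec["region"] is not None and p.get("region") != spec["region"]:
            continue
        selected.append(p)
    adjusted = bool(adjustments)  # mark results using non-published parameters
    return selected, adjusted
```

Keeping the published values as defaults means an unmodified run reproduces the national specification, while any adjustment is explicit and its results are labeled non-comparable.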

Challenges Ahead: As with any pioneering effort, NCQA’s Roadmap faces many daunting challenges:

  • Data in electronic sources come in both structured and unstructured formats. We need correct data in the most efficient, economical, and verifiable ways possible for measurement, quality improvement, and benchmarks. Moving to more uniform implementation of national electronic data and interoperability standards, and using technology like natural language processing, are some of the ways we hope to address this disparity.
  • Another challenge is reconfiguring infrastructure throughout the health care system for electronic reporting. We know that quality reporting solely from EHRs or claims is not as robust as having a data aggregator or intermediary match, verify and deduplicate data from across disparate sources. The latter provides a much more complete picture of patients’ experiences and a more accurate evaluation of provider performance. Although the most advanced plans, systems and practices are ready for all-electronic reporting, others need more time, and we will work to accommodate entities as they transition to full electronic reporting.
  • Other challenges include getting access to data from all pertinent sources, some of which now block data sharing for proprietary and legal reasons. Collaboration from all stakeholders is necessary to accelerate this revolutionary advancement.

There is much to learn as we strive to achieve these ambitious goals. NCQA is committed to working with all stakeholders to meet these challenges so we reduce the burden of measure reporting while improving the value of the information we use for measurement.

Improving Accuracy

Although the draft highlights strategies to reduce burden, the final strategy also needs to identify ways to ensure the veracity of measurement data. As noted in the draft, accuracy of eCQM calculation varies widely. Improving eCQM accuracy is essential to alleviate the burden of inaccurate results that skew and undermine confidence in value-based payments determined by measurement. All parties must trust metrics used for value-based payments, particularly patients and providers.

We are grateful that your office approved NCQA’s eMeasure testing methodology for your EHR Technology certification program. Our methodology validates whether a vendor’s system can select the correct records for the measure specification and report them in the standard format as specified by ONC and CMS.

Our methodology is much more robust than ONC’s Project Cypress methodology:

  • We include hundreds more test cases per measure (1,000 to 1,500); Project Cypress may include as few as 10 to upwards of 100.
  • We require that a target system correctly select all relevant cases. If it produces an error, the target system receives a completely new deck with all-new synthetic patient records. This process repeats until the system accurately selects and reports the records that meet the measure specification.
  • We use an iterative approach that is the core strength of our process because it continues to push the tested system to produce accurate results while preventing any gaming of the test decks to achieve a “pass.”
  • We do not base synthetic data on just 100 to 150 patients; instead, we generate test patients based on measure criteria and can produce unlimited iterations for test decks. This ability helps ensure accuracy at every node of the measure calculation. In our experience, it generally takes 5 to 10 decks per measure before systems can produce accurate results.
  • NCQA uses Continuity of Care Documents – the national interoperability standard for sharing records between systems – as the ingestion standard, rather than only QRDA1, which relies on a measure specification and may not accurately convey correct patient data.
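The iterative deck process described above can be sketched as a simple loop. This is an illustration under stated assumptions, not NCQA's actual tooling: real decks hold 1,000 to 1,500 synthetic patient records generated from measure criteria, whereas the records and names below are toy stand-ins.

```python
import random

# Hypothetical sketch of the iterative test-deck loop: the tested system must
# correctly classify EVERY record in a deck; any single error triggers a
# completely fresh deck of newly generated synthetic records, so decks cannot
# be memorized or gamed. All names and deck contents are illustrative.

def generate_deck(size, rng):
    """Stand-in for generating synthetic patients from measure criteria."""
    return [{"id": i, "meets_measure": rng.random() < 0.5} for i in range(size)]

def certify(select_records, deck_size=1000, max_decks=10, seed=0):
    """Return the deck number on which the system passed, or None."""
    rng = random.Random(seed)
    for deck_number in range(1, max_decks + 1):
        deck = generate_deck(deck_size, rng)
        expected = {p["id"] for p in deck if p["meets_measure"]}
        if select_records(deck) == expected:
            return deck_number  # every record classified correctly
    return None  # never produced a fully correct selection
```

Because each retry uses an entirely new deck, a system can only pass by implementing the measure logic correctly, not by tuning itself to a fixed set of test records.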

NCQA’s eMeasure Certification Program assesses both the reliability and validity of a target system’s measure algorithm and data processing:

  • Validity testing of the measure algorithm assesses whether reported data are correct based on the measure specifications.
  • Reliability is largely based on the data processing used to extract, translate, and load data for reporting purposes. We require vendors to complete a data processing roadmap to assess how they map and process data. These self-attested roadmaps are invaluable resources for NCQA’s HEDIS Audit program.

To further ensure accuracy, certified auditors examine the measurement data to verify that it accurately represents the source data from reporting entities’ systems. NCQA rigorously audits Medicare Advantage plan data through this “primary source verification” process.

Auditing is critical because results determine bonuses and rebates that help higher-quality plans compete with better costs or benefits. However, Medicare’s Merit-based Incentive Payment System (MIPS) lacks this essential auditing process and thus generates unverifiable results. Algorithms could potentially supplement traditional auditing, and we would be happy to discuss this with you in greater detail.

Because accuracy is so important, we urge your office to develop requirements for reporting accuracy, similar to what the draft suggests, in establishing metrics to score the usability of electronic systems. Medicare could also promote better accuracy by providing bonuses to entities that have verified the validity of data they report from their systems. Mandating or recognizing use of the NCQA’s ONC-approved validation process could also help.

Our process should set the standard for testing and validating systems. The current low bar for verifying accuracy benefits no one and should not remain, given that we have a successful “best practice” system. We therefore urge you to transfer eMeasure certification testing to NCQA, as you did when making the National Council for Prescription Drug Programs the national standard testers for e-prescribing.

eCQM Development

We also are aware of, and working to address, concerns raised in the draft about time and transparency for eCQM development and lack of eCQMs that are meaningful to all physicians.

It does take time to develop high-value measures and we are eager to explore alternate approaches that maintain the necessary level of rigor. NCQA’s process starts with identifying what issues to measure and the evidence-based guidelines on which to base the measures. We assemble expert panels of consumers, clinicians, payers, government, academics and others to build consensus on how to specify reliable and valid measures. We extensively test the proposed concepts in both simulated and real-world data environments to ensure validity, reliability and usability. We follow this with public comment for transparency and obtain further input, which we present to the expert panels to consider for incorporation into the final product as appropriate.

As suggested in the draft, we do not hold entities accountable for at least the first year of new measure results in case unexpected issues arise with implementation. Once we address all these issues, measures go to our standing expert Committee for Performance Measurement and NCQA’s Board of Directors for approval. Finally, we take our measures to the National Quality Forum for approval by their broad membership of relevant stakeholders.

We also are partnering with physician specialty societies to help them develop measures relevant to their members. We believe a long-term strategy for eCQM adoption and development must include establishing core sets applicable to each type of clinician. When we have these core sets in place, Medicare should require all MIPS clinicians, in each specialty, to report on their core set.
This requirement will provide more valid and comparable results than the current policy, in which each clinician can cherry-pick measures on which they perform well, further skewing results.

Thank you again for the opportunity to comment on the draft. If you have any questions, please contact NCQA Director of Federal Affairs, Paul Cotton, at (202) 955-5162 or


Margaret E. O’Kane
