September 13, 2019
Seema Verma, Administrator
Centers for Medicare & Medicaid Services
7500 Security Blvd.
Baltimore, MD 21244
Dear Administrator Verma:
Thank you for the opportunity to comment on the proposed 2020 Revisions to Medicare Physician Fee Schedule Payment Policies. The National Committee for Quality Assurance (NCQA) supports the proposed move, starting in 2021, to Merit-Based Incentive Payment System (MIPS) Value Pathways (MVPs). MVPs include smaller sets of specialty-specific, outcome-based and Alternative Payment Model (APM) aligned measures, as well as more outcomes and population health measures. Moving to MVP core sets for similar types of clinicians is essential for accurate, comparable results, and a cornerstone of NCQA’s efforts to improve measurement results accuracy, which MIPS urgently needs.
MVPs also can provide more timely feedback to clinicians with claims, registry and electronically submitted data, which aligns with NCQA efforts to move to digitized electronic quality measure reporting. Electronic reporting can reduce burden, measure more of what matters and improve accuracy. CMS can further enhance digital measurement accuracy with financial incentives for moving to electronic reporting and validation with rigorous auditing and certification requirements.
Current MIPS bonus points for end-to-end electronic reporting are one way to provide needed financial incentives, and we believe you should increase, rather than phase out, these bonus points. Since 2010, the federal government has provided over $30 billion in incentives for the industry to move to a more digital data world. We will need additional incentives to leverage this investment, as MVPs alone are not enough to support all needed changes for digital reporting.
Also, while we see value in linking quality measures with cost measures and Improvement Activities (IAs) in MVPs, we have concerns about establishing an unfunded mandate for measure stewards to do this. If you choose to proceed with this proposal, you need to develop a process to provide relevant data to measure stewards and a way to fund the work involved.
Finally, we do not support the proposal to set flat benchmarks for HEDIS® Diabetes: Hemoglobin A1c (HbA1c) Poor Control (>9%) and Controlling High Blood Pressure measures. Doing so would be arbitrary and not prevent overtreatment as suggested.
Detailed comments on these and other issues in the proposed rule are below.
Measurement Accuracy: We applaud the MVP proposal because moving to more standardized measure sets can improve results’ accuracy and comparability. It is essential that clinicians not cherry pick measures and instead report measures specific to specialties or clinical conditions. However, this is just one of several steps CMS could take to strengthen the accuracy of results. You also need:
- Robust prospective, in-line data auditing before reporting,
- Thorough testing of systems for electronic clinical quality measure (eCQM) reporting, and
- Strong support and incentives for moving to digitized electronic reporting of quality measures.
Rigorous Auditing: Obtaining accurate MIPS quality results requires the same rigorous auditing for MIPS measures as is now required for Medicare Advantage Plan Star Ratings measures. Medicare Advantage audits are concurrent reviews that assess all quality data and correct data processing problems before reporting. MIPS audits only look retrospectively at a few pieces of information in a small random clinician sample. That is insufficient given the projected $1.168 billion in MIPS 2020 performance year/2022 payment year payment adjustments.
With health plans, certified vendors aggregate data from multiple sources that we audit. For MIPS, thousands of practices report directly and others report through intermediaries, which poses different challenges. You therefore need to provide glide path options for quality reporting but require validation for options to report eCQMs. For example, one option for reporting MIPS is to use a registry or a certified EHR to report eCQMs. However, registries may self-attest their validation model, while EHRs must receive ONC certification; both need certification. We further encourage you to provide greater incentives to practices opting to report using a certified system, as well as a prospective audit of the data used in the reporting process. We appreciate the challenges in moving to the greater rigor needed in MIPS audits and would be happy to work with you on this.
Thorough Testing of eCQM Reporting Systems: Accuracy in MIPS and other value-based payment (VBP) programs requires rigorous testing of systems that report measures. The Office of the National Coordinator for Health IT (ONC) approved several labs to test and certify eCQM systems. However, most of these labs do not have any quality measure expertise and rely on ONC’s Project Cypress testing tool, which until recently covered only 80% of measure logic. An update in 2019 still covers just 95%, and systems certified at 80% do not have to retest to the more rigorous standard or recertify for updated measure specifications. NCQA’s electronic clinical quality measure certification program is the only ONC-approved program that reviews 100% of the logic for each measure and requires recertification for updates to measure specifications. Project Cypress only includes test decks without errors, while NCQA includes errors to see if systems detect data flaws. Ensuring trust in MIPS and other value-based payment arrangements will require NCQA’s level of eCQM system testing.
NCQA is the only ONC-approved lab with extensive quality measurement expertise, a concentrated eCQM validation focus and mandatory annual recertification, and our program is endorsed by the Premier alliance, Oklahoma’s MyHealth Access Network and Diameter Health:
- Premier said NCQA is “considerably more robust and rigorous” and “easier to use” because NCQA uses industry-standard formats, multiple test decks with digitized results, a web-based interface and streamlined discrepancy validation.
- MyHealth said NCQA provides “assurance of validity and the process controls necessary to detect issues early and communicate clearly” and integrity of results.
- Diameter Health’s comparison found NCQA to be “more robust and comprehensive.” NCQA “delivers substantial value in verifying software accuracy and consistency,” is “more aligned” with real world data and has safeguards to minimize manipulation.
We therefore urge requiring eCQM certification through our much more robust testing system.
Strong Support for Digitized Reporting: CMS, NCQA and others share the goal of extracting quality measurement data from electronic health records (EHRs), health information exchanges (HIEs), registries and other electronic clinical data sources. This can reduce reporting burden, improve accuracy and let us measure more of what matters with the richer clinical data in electronic sources not found in claims used for measures today. However, many clinicians and other providers will need assistance in developing capacity for digitized electronic reporting. This includes the cost of obtaining certified eCQM systems, upgrades to internal data systems and technical assistance in how to support digitized reporting.
We support CMS’ approach to enabling data reporting from the various data systems listed above (e.g., EHRs, registries, HIEs), but we urge CMS to ensure consistent validation of reporting systems. For example, CMS has authorized Qualified Clinical Data Registries (QCDRs) and qualified registries to self-attest to good quality reporting instead of requiring validation as mandated for certified electronic health record technology (CEHRT) systems – even when both systems report the same eCQM. This inequity in validation undermines the MVP direction and the national comparability that CMS seeks to achieve. We therefore urge CMS to hold any system reporting eCQMs to the same level of validation to assure trust in the results they report.
To address the costs of certification and internal system upgrades, you should strengthen incentives for building capacity for digital quality measure reporting. One way to do this is to continue to increase existing bonus points in MIPS for end-to-end electronic reporting. We therefore do not agree with the proposal to maintain the 10% cap on end-to-end electronic reporting points or with the suggestion that MVPs can incorporate eCQMs without end-to-end reporting bonus points.
Moving to MVPs alone will not provide sufficient incentives or fairly compensate clinicians and other stakeholders for making all needed upgrades, so you do need to increase MIPS bonus points for end-to-end electronic reporting.
To further build electronic reporting capacity, we suggest providing technical assistance, especially to smaller practices and underserved areas most likely to lack needed expertise and/or resources.
You could provide such assistance in much the same way that your Transforming Clinical Practice Initiative aided practices in sharing, adapting and further developing their comprehensive quality improvement strategies.
Another potential strategy to support electronic reporting is to develop a Promoting Interoperability measure to assess and reward how well provider systems support digitized electronic clinical quality measure reporting. Such a measure might include metrics for full interoperability, rigorous eCQM reporting certification and verification beyond attestation that systems fully support data exchange. We would be happy to work with you to develop such a measure.
We therefore urge you to continue and increase end-to-end electronic reporting bonus points, provide electronic reporting technical assistance, and develop an electronic reporting measure.
MIPS Value Pathways: We applaud the move to MVPs that will focus MIPS clinicians on smaller sets of specialty and disease-focused measures and end the measure cherry picking that now skews MIPS results, impacting bonuses, penalties and usefulness for comparison. MVPs’ potential to provide more timely feedback to clinicians is vitally important, as is their focus on patient-reported outcome measures and enhanced data for patients.
We support the proposed guiding principles for MVPs:
- Limited sets of measures and activities that are meaningful to clinicians, reduce or eliminate burden and simplify scoring.
- Measures and activities that provide comparative performance data valuable to patients and caregivers in evaluating and choosing clinicians and care.
- Measures that encourage performance improvements in high priority areas.
- Using Alternative Payment Model (APM) measures where feasible and linking cost and quality measurement to reduce barriers to moving to APMs.
We agree that linking quality measures with relevant cost measures and IAs can help make the overall Quality Payment Program (QPP) more coherent. We have concerns, however, about the proposed requirement for measure stewards to link measures to existing and related cost measures and IAs.
As HEDIS measure stewards, we are happy to share any information we have on a given measure’s implications for cost or IAs. However, it is not reasonable to expect measure stewards to do unfunded work via a regulatory mandate in instances where we do not have access to the data to do so. If you choose to proceed with this proposal, you need to develop a process to provide relevant data to measure stewards and a way to fund the work involved.
Similarly, we appreciate your intention to work with us to adapt HEDIS Acute Hospital Utilization and Emergency Department Utilization measures for MIPS and are pleased to work with you on this. However, we note that this work also requires additional resources.
To promote greater cohesion among Promoting Interoperability and other performance categories you could give extra credit for clinicians who meet criteria in more than one category with one activity. An example would be when clinicians conduct care coordination using electronically exchanged data.
Care coordination measures rely on data from a variety of sources, are enabled by electronic data and speak to collaboration across settings. This approach will promote cohesion and give clinicians tangible examples of the value of using electronic data to improve care. Similarly, for performance measure collection, you should provide extra credit for electronic submission.
MVP IAs should be specialty-specific and condition-focused, as well as focused on patient experience and engagement, team-based care and care coordination. It is not a case of either the former or the latter, but a need for all of the above. IAs also should focus both on improving quality and cost measures within an MVP and on activities relevant to the practice. Because you need to evaluate IAs, IAs should be measures, which provide the easiest way to conduct evaluations.
Clinicians would benefit most from outlier analysis and other actionable feedback if it is shared as frequently as feasible, but at least quarterly. This applies to administrative claims-based feedback as well. Frequent feedback better informs care delivery and identifies the impact of quality improvement efforts.
For MVP choice, there should be a core set for each specialty or subspecialty based on a clinician’s training and claims history to allow apples-to-apples comparisons among those providing similar services. In addition, you could allow clinicians to select additional measures for their own quality improvement efforts or to highlight areas in which they believe they can excel. Each MVP should include a mandatory core set of measures and activities but again offer options beyond the core set that clinicians can tailor to their practice. Measure and activity criteria should prioritize outcomes, including patient-reported outcome measures (PROMs), and other high priority measures. You should not limit them to an arbitrary number or only to cost measures aligned with quality measures.
We do not support a “Call for MVPs” like the Call for Measures. Instead, we urge you to start with MVPs for high cost bundled procedures with the most potential savings, as well as primary care bundles that can serve as AAPM building blocks. You should organize MVPs around specialties for specialty practices, and around public health priorities for primary care. You also should begin with just one MVP per specialty or subspecialty and then adapt as needed going forward.
We share your concern about obtaining comparable data and developing a single benchmark with multiple collection types. This requires investing in research to see if it is possible to compare different collection types fairly and accurately. We know this is complex and are happy to work with you on it.
Obtaining reliable performance information using patient reported data is a priority as we move to a digital quality measurement world, especially given the many ways patients can now generate health data. We incorporate patient-generated data in our HEDIS measures if a clinician providing primary care collects the data while taking a patient’s history and enters it in the patient’s legal health record. We list HEDIS measures with member reported information in Appendix B.
Incorporating patient generated data into measures at the individual clinician level requires testing and validating each measure’s use of such data at the clinician level to assure reliability of results. This, of course, requires resources. Similarly, patient experience measure reporting at the individual clinician level requires validating the measures at that level.
To include PROMs and incorporate patient voices, you should encourage use of innovative tools that put patients at the center of care and offer extra credit to clinicians who use them. This is especially important for high-priority, high-cost procedures like hip and knee replacement that may have PROMs in MVPs. You should incorporate single questions or brief patient experience surveys in MVPs by weaving them into updated patient experience of care surveys (discussed in detail below).
For clinician performance data to include in a single “value indicator” useful to patients you should use measures that assess effectiveness of care, care coordination and patient experience.
Quality: We have been very concerned that MIPS lacks apples-to-apples comparable data. There is an urgent need for mandatory core sets for accurate comparison and phasing out self-selection that skews results. The proposed move to MVPs can greatly improve the current situation.
Addition & Removal of Measures: The measures you propose adding or removing from MIPS in general are reasonable. Specifically, we support:
- Adding CMS’ All-Cause Unplanned Admission for Patients with Multiple Chronic Conditions measure.
- Replacing the Pneumococcal Vaccination measure with the comprehensive HEDIS Adult Immunization Status measure.
However, we disagree with a few of the proposed changes:
- Replacing the HEDIS Persistence of a Beta-Blocker Treatment After a Heart Attack with Coronary Artery Disease (CAD): Beta-Blocker Therapy – Prior Myocardial Infarction (MI) or Left Ventricular Systolic Dysfunction (LVEF < 40%). The HEDIS measure assesses whether patients receive persistent medication over six months; the proposed replacement assesses only whether patients receive the drug once.
- Replacing the HEDIS Medication Reconciliation Post Discharge measure with Documentation of Current Medications in the Medical Record. The proposed replacement is not appropriate for patients who are at high risk post discharge, so we urge you to include both measures.
- Removing Maternal Depression Screening. The measure is appropriate for use in episode-based care attributed to obstetrician-gynecologists.
- Not implementing the exclusion for adults 80 and older with frailty for the following measures: Controlling High Blood Pressure, Persistence of a Beta Blocker Treatment After a Heart Attack (if retained), and Osteoporosis Management in Older Women Who Had a Fracture. This exclusion is critical for focusing the measures on the population most likely to benefit from the measured services. Without this exclusion, these measures will be out of alignment with what we require for HEDIS reporting.
Flat Benchmarks: We do not support the proposal to set benchmarks based on flat decile percentages for HEDIS Diabetes: Hemoglobin A1c (HbA1c) Poor Control (>9%) and Controlling High Blood Pressure measures. Doing so would be arbitrary and would not prevent overtreatment as suggested but instead could discourage appropriate care for individual patients. Both measures have exclusions for long-term care, frailty and advanced illness. The blood pressure measure’s 140/90 threshold prevents overtreatment for patients with advanced disease or comorbidities for whom a <130/80 target is not appropriate and does not prohibit clinicians from trying to achieve the <130/80 target in cases when that is appropriate. The diabetes measure’s target of <9% is appropriate for all patients. Rather than flat benchmarks, we suggest developing benchmarks based on actual performance, with a cap based on rates for the highest performers and partial credit for achieving progress toward the targets.
Data Completeness: We support the proposal to raise the quality measures’ data completeness criteria and encourage you to raise the threshold from 60% to 80%, rather than 70%, of all eligible patients for each measure. We encourage you to continue steadily raising this threshold over time, as 100% is ultimately necessary to identify topped-out measures and prevent gaming. Mandatory reporting of core population-based measure sets would show where there is actually limited performance variability above 95%, which defines genuinely topped-out measures. A higher data completeness threshold is appropriate for extremely topped-out measures retained due to limited measure availability for some specialties.
We also support realigning the MIPS measure update cycle with the eCQM annual update process.
Improvement Activities (IAs): We do not support removing the names of the approved Patient-Centered Medical Home (PCMH) and Patient-Centered Specialty Practice (PCSP) programs that earn clinicians full auto-credit for the MIPS IA category.
The evidence-based formal PCMH criteria help standardize effective practices and attributes in recognized practices and help payers standardize payments and compare performance. Without specifying which programs qualify, clinicians will lack assurance that participation in a given PCMH or PCSP program will earn the full 15 IA points. Specifying and updating a public list of any additional qualified programs would more constructively address concerns about excluding programs that have the required national scope, without creating uncertainty.
Promoting Interoperability: We support the direction of PI proposals and offer specific comments on reducing burden, incorporating patient-generated data, opioids and other related PI issues.
Reducing Burden: Given your focus on reducing reporting burden, we urge you to take another burden-reducing step by accepting data feeds from MACRA-approved PCMH program sponsors on which of their recognized practices meet Promoting Interoperability criteria. NCQA’s PCMH program includes standards that align with Promoting Interoperability measures, and we could easily reduce burden by sending CMS data feeds of PCMH clinicians who meet these standards. We provide a crosswalk of Promoting Interoperability measures related to NCQA PCMH standards in Appendix A. This would eliminate the need to double-report information to CMS that clinicians have already documented to NCQA.
Opioids: We do not support the proposals to make prescription drug monitoring program (PDMP) queries optional and to remove the Verify Opioid Treatment Agreement measure. We have a “Controlled Substance Database Review” standard in our PCMH program and added an “Opioid Treatment Agreement” standard to all our patient-centered clinician programs, including PCMH and PCSP, this July. Challenges and variation are real but not insurmountable, and maintaining strong incentives for PDMP queries and treatment agreements can help drive needed improvements. However, we support making the e-prescribing measure worth up to 10 points if you finalize the PDMP query proposal.
- We support providing bonuses for certified FHIR-based API adoption before the compliance date of ONC’s final rule, if ONC finalizes its proposed FHIR-based API criteria.
- We support SAFER EHR safety guidelines for clinicians at inpatient facilities and systems but believe they may not be appropriate for ambulatory settings. We suggest, however, requiring vendors and systems to proactively test with SAFER as they implement EHRs or make upgrades.
CAHPS: The addition of narrative questions with free text responses is intriguing and has potential to provide richer feedback and insights for consumers choosing a physician. However, it is premature at best to score results for inclusion in an accountability framework.
Collecting CAHPS at the individual clinician level would be challenging given the small patient numbers for most clinicians, which could skew results, as well as the current lag time and lack of specificity about which types of patients provide what feedback. Data at the practice level would be more reliable and useful for consumers and would require a stratified sampling approach based on patients who had an actual visit to a physician. You could pilot a practice-level approach through a demonstration project with a limited number of practices.
As for letting patients provide a score for overall experience and satisfaction rating, we note that experience is much more nuanced and detailed than simple satisfaction. There is value in allowing consumers to rate their overall experience, which is consistent with the “reporting of experiences” wording of CAHPS questions. However, rating their overall satisfaction introduces a level of subjectivity that is not consistent with CAHPS structure.
Regarding CAHPS collection, we acknowledge that mail and phone CAHPS response rates have been dropping drastically, a phenomenon across healthcare and nonhealthcare surveys. CAHPS needs updated data collection methods to provide more robust and reliable results. For email and web data collection, practices need patient email addresses on file. Deploying surveys through an established patient portal that patients are already familiar with may be helpful to encourage participation. Other options include modular survey administration where patients receive only a portion of survey questions at one time, which may also encourage participation.
Scoring and Thresholds: We support the proposed scoring adjustments that align with statutory requirements. We also support increasing the performance threshold for earning MIPS bonuses from 30 to 45 points in 2020 and to 60 in 2021. This should give more genuinely high-quality clinicians meaningful bonuses, which have been small because many exclusions and cherry picking reduce the number of clinicians subject to the penalties that fund the bonuses. The same holds true for increasing the exceptional performance bonus threshold from 75 to 80 points for 2020 and to 85 in 2021.
Physician Compare: We appreciate your interest in reporting MIPS scores in each performance category and final scores for MIPS-eligible clinicians, as well as aggregate data on the range of scores. However, we question doing so at this time given the substantial concerns about the accuracy of results and cherry picking of measures. Posting skewed data would be unfair to genuinely high-quality clinicians and would misinform consumers.
Evaluation & Management: We support increasing evaluation and management service reimbursement, which should raise payments to primary care by 12%, as well as the proposal to reduce related documentation requirements. This will help address the long-standing undervaluation of primary care.
Opioids: We support expanding opioid use disorder (OUD) coverage to bundled episodes, including via telehealth, and to Opioid Treatment Programs approved by the Substance Abuse and Mental Health Services Administration. This will provide much-needed increased access to treatment, especially medication assisted treatment which is the most effective. We particularly applaud the proposal to not charge beneficiary copays for these bundles, which should be a permanent policy given the devastating effects on individuals, families and society from untreated OUD.
Care Management Services: We share concerns that these services that are so critical for high quality patient care remain underutilized. This is likely due to complex rules, difficult documentation requirements, strict beneficiary qualifying criteria and low relative reimbursement rates.
We support your proposals that begin to address some of these issues and to designate remote patient monitoring as a care management service. However, you should further increase payment for these services that are so effective in improving outcomes and generating savings. We also question your focus on potential “overbilling” given that these services account for so little payment and the minimal evidence of impropriety. Given the limited experience with these new services and codes, you should view overlap with services like interprofessional consultation as a positive step in moving toward more coordinated care.
Other Payer Medical Home Model AAPMs: We oppose the proposal to restrict Other Payer Medical Home Model (MHM) Advanced Alternative Payment Models (AAPMs) to those formally aligned with Medicare. The MHM has hundreds of participating private payers, thousands of medical homes and is thus likely to be a key pathway for private sector models to achieve AAPM status. Not counting these models toward the All-Payer Combination Option would be unfair to practices that invested in value-based clinical transformation, demonstrated positive results and expect to qualify as Other Payer AAPMs under the MHM standard. We urge you to rescind this proposal and allow private sector MHMs to qualify for the All-Payer Combination Option.
Thank you again for the opportunity to comment on the proposals. If you have any questions, please contact NCQA Director of Federal Affairs, Paul Cotton, at (202) 955-5162 or firstname.lastname@example.org.
Margaret E. O’Kane