DMC Bytes - What Would You Say to a Group of Employers About AI/ML-Driven Measures?

I was recently invited to offer brief comments during a meeting of several large employers. Many of them were evaluating proprietary technology from companies that use artificial intelligence and machine learning to identify high-performing clinicians, form tiered networks and centers of excellence, and create incentives for their employees to see the chosen physicians. The organizer of the event asked that I focus on principles or themes the employers should consider when being pitched by these companies. Below are a few of the points I made and a couple I have added since. In retrospect, it would have been great to solicit DMC input as I prepared the talk, and I will next time. I still welcome your comments and suggestions, since interest in and use of AI/ML to assess, rank, and tier physicians/networks/institutions is only going to grow.

Measure at the system level for accountability. I suggested that measures for quality improvement work well at the individual level, but accountability for reporting and value-based purchasing is best placed at the practice or system level. This is based on the premise that a good clinician in an excellent system is more likely to consistently generate better results than an excellent clinician in a poorly performing system.  

Foster transparency. Clinicians want a clear understanding of the methodologies used to assess care. This is a challenge with AI/ML because of concerns about algorithmic bias and “black box” methods. Entities using AI/ML to guide or assess clinical quality need to spend sufficient time explaining, educating, and sharing as much as possible about their methodology; inadequate transparency could undermine buy-in and acceptance of what might otherwise be unique, valid, and useful ways to assess care.

Create trust. As more sophisticated metrics are designed and used to evaluate clinical care and support value-based purchasing, the entities being assessed need not only to understand the underlying evidence used to design a measure but also to trust that the data used are validated and audited. This is also important for employers and others who want to use the results for benchmarking and comparisons to drive value-based purchasing.

Leverage professionalism to drive change. Clinicians want to provide good care to the people who entrust them with their health and well-being. Momentum for change can be lost when clinicians feel that their commitment to delivering high-quality care is questioned or that accountability is misplaced.

Avoid the appeal of very small measurement sets. We need to reduce the burden associated with the current model of quality measure development, implementation, data collection, and reporting. However, a small number of outcome measures alone cannot generate the same insights as a set that also includes well-designed, evidence-based process measures and intermediate outcome measures. We need outcome measures, but we also believe in and need (better) evidence-based process measures linked to those desired outcomes. I agree with those who suggest that there are too many measures, but I submit that it is the quality of the quality measures that contributes to the burden, not necessarily the quantity.

What would you have added? What would you have told me not to say? What did I get wrong? Let me know in the Community Forum.