Dissecting Algorithmic Bias: A Digital Quality Summit Presentation
July 22, 2021 · NCQA Communications
Algorithms are a widely adopted and promising set of tools for improving health care delivery. But do they promote equitable care? Dr. Ziad Obermeyer seeks to answer that question through his research on the intersection of machine learning, medicine and health policy. Last week, he presented "Dissecting Algorithmic Bias" at the Digital Quality Summit and discussed what he's learned about the advantages and limitations of algorithms.
“I got into this line of research from a fundamentally optimistic point of view on algorithms… [A]s a clinician and as a researcher, I see many, many cases where humans make very bad decisions, and where algorithms and data stand to make a huge difference,” he noted. “If we build [an algorithm] the right way… that algorithm then becomes, instead of a tool that reinforces those inequalities… a tool for getting resources to people who actually need resources.”
Identifying Algorithm Bias: A Case Study
As an example of algorithm bias, Dr. Obermeyer used a case study on an algorithm that directs preventive care to 70 million chronically ill patients a year. This population tends to have high costs and poor care outcomes, so the goal is to use “high-risk care management” to treat them before they get sick. Dr. Obermeyer was looking for racial bias in the algorithm.
In principle, all patients to whom the algorithm assigns the same risk score have the same care needs and should receive the same treatment, regardless of race. But Dr. Obermeyer found that at the same score, Black patients were in worse health than White patients. Why was that?
He began by looking at what was working correctly: the algorithm's predictions of health care costs were accurate. Then he looked at what wasn't: Black patients incurred lower health care costs than White patients with the same conditions.
Eliminating Label Choice Bias
Dr. Obermeyer stressed the importance of holding an algorithm accountable to its "ideal target": what it should be predicting (health, in this case) versus what it actually predicts (costs). Because White patients often have better access to health care, as well as access to better-quality care, they tend to have higher health care expenditures than Black patients with the same severity of illness. As a result, an algorithm that predicts costs instead of health will tend to recommend more care for White patients, reinforcing that inequity. This is an example of what Dr. Obermeyer calls label choice bias.
Luckily, detecting bias is the first step toward fixing it. After the algorithm was retrained to predict health, it was 84% less biased—a result reinforced by follow-up with stakeholders.
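The mechanism behind label choice bias can be illustrated with a small, entirely synthetic simulation. This sketch is not the algorithm Dr. Obermeyer audited; the group labels, the 30% access gap, and the 20% flagging threshold are all hypothetical assumptions chosen only to make the effect visible:

```python
import random

random.seed(0)

def simulate_patient(group):
    # True illness severity is drawn from the same distribution for both groups.
    severity = random.gauss(5, 2)
    # Hypothetical access gap: group "B" generates 30% lower costs
    # at the same severity, mirroring unequal access to care.
    access = 1.0 if group == "A" else 0.7
    cost = severity * access + random.gauss(0, 0.5)
    return severity, cost

patients = [("A",) + simulate_patient("A") for _ in range(5000)] + \
           [("B",) + simulate_patient("B") for _ in range(5000)]

def flagged_share(score):
    # Flag the top 20% of patients by the chosen score (the "label"),
    # and report what share of the flagged group is from group "B".
    ranked = sorted(patients, key=score, reverse=True)
    top = ranked[: len(ranked) // 5]
    return sum(1 for g, _, _ in top if g == "B") / len(top)

# Cost as the label: group B is under-selected despite equal severity.
print("share of group B flagged (cost label):    ", flagged_share(lambda p: p[2]))
# Severity as the label: selection is roughly balanced.
print("share of group B flagged (severity label):", flagged_share(lambda p: p[1]))
```

Ranking on the cost proxy under-selects group B even though illness severity is identical across groups; ranking on severity itself, the analogue of retraining the algorithm to predict health rather than costs, closes the gap.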
Moving Forward: “The Playbook”
Algorithms are complex and can be difficult to maintain. To address this, Dr. Obermeyer and his research partners at the University of Chicago's Center for Applied Artificial Intelligence created the Algorithmic Bias Playbook: detailed instructions for creating and adjusting algorithms to eliminate bias through four main strategies:
- Designate a high-level, strategic steward responsible for algorithms, advised by a diverse, engaged group of internal and external stakeholders.
- Maintain an inventory of algorithms and update it frequently.
- Document an algorithm’s purpose, ideal target and performance, and hold it accountable.
- Fix or delete biased algorithms as problems arise.