Demystifying AI and Machine Learning

August 9, 2022 · Andy Reynolds

A highlight of the Digital Quality Summit 2022 was the panel discussion "Demystifying AI and ML: How Algorithms Can Improve the Quality of Clinical Workflow."

In a wide-ranging conversation about artificial intelligence and machine learning, Jonathan Chen, MD, PhD, Assistant Professor at the Stanford Center for Biomedical Informatics Research; Deepti Pandita, MD, Chief Health Information Officer at Hennepin Healthcare; and moderator Zeshan Rajput, MD, Principal at MITRE, discussed:

  • Having realistic expectations of AI.
  • How to think about computers that “think.”
  • The use case that’s best suited for AI and ML.
  • Keeping correlation and causation in context.
  • Finding, fighting and preventing algorithmic bias.

Expectations Are Everything

AI and ML have been marketed in care delivery, and in health care in general, as:

  • The key to wellness.
  • A way to reduce costs.
  • A solution to labor shortages.
  • An answer to physician burnout.
  • A guardrail for quality and safety.
  • A driver of consumer behavior change.
  • The backbone of clinical decision support.

The panel agreed that AI and ML can help in some of these cases, in small ways, but it’s important to separate hope from hype. If, when you think about artificial intelligence, you picture HAL 9000 from 2001: A Space Odyssey or J.A.R.V.I.S. from Iron Man—computers that can think independently—you’ll probably be disappointed. As Dr. Chen put it, “General AI systems that can think and reason through complex problems… we’re nowhere close to such technology.”

Thinking About Computers That “Think”

“Artificial intelligence” refers to a computer that mimics intelligent behavior. Dr. Chen gave an example of basic AI from the rules-based systems of the 1980s: A patient has a fever and low blood pressure. The computer predicts the patient has sepsis and suggests the physician do something about it. But that’s usually not what people mean when they refer to AI.
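To make that concrete, a rules-based check of that era amounts to a hand-written if/then statement. The sketch below is purely illustrative; the function name and clinical thresholds are assumptions, not anything the panel specified:

```python
# Hypothetical 1980s-style rules-based "AI": a hand-coded rule, not a model
# learned from data. Thresholds are illustrative assumptions.

def sepsis_alert(temp_f: float, systolic_bp: float) -> bool:
    """Flag possible sepsis when a fever co-occurs with low blood pressure."""
    has_fever = temp_f >= 100.4      # assumed fever cutoff
    low_bp = systolic_bp < 90        # assumed hypotension cutoff
    return has_fever and low_bp

if sepsis_alert(temp_f=101.3, systolic_bp=84):
    print("Possible sepsis -- consider evaluating the patient")
```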

“When people are selling you AI, almost always it is machine learning,” Dr. Chen said, “a set of tools and techniques to automatically make inferences about complex data sources.”

With machine learning, you don’t program a computer with a rule; you give the computer data to consume. The computer sifts through the data, seeking the proverbial needle in a haystack: Here are 100 patients with sepsis and 100 without sepsis. How are they different?
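As a rough sketch of that difference, the snippet below trains a simple classifier on a made-up table of 200 patients (100 labeled with sepsis, 100 without) instead of encoding any rule by hand. The features, numbers and choice of model are illustrative assumptions, not anything described by the panelists:

```python
# A minimal machine-learning sketch: hand the computer labeled examples and
# let it find what separates the two groups. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per patient: [temperature_f, systolic_bp, heart_rate]
septic = rng.normal([101.5, 85, 110], [1.0, 8, 12], size=(100, 3))
non_septic = rng.normal([98.6, 120, 80], [0.7, 10, 10], size=(100, 3))

X = np.vstack([septic, non_septic])
y = np.array([1] * 100 + [0] * 100)          # 1 = sepsis, 0 = no sepsis

model = LogisticRegression(max_iter=1000).fit(X, y)

# The learned coefficients are the "needle in the haystack": how strongly each
# feature pushes the predicted odds of sepsis up or down.
print(dict(zip(["temp_f", "systolic_bp", "heart_rate"], model.coef_[0])))
print(model.predict_proba([[101.2, 85, 115]]))  # estimated risk for a new patient
```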

The Sweet Spot: Decision Support

Automating or aiding decision making is one area where these systems excel. Risk calculators, like the Pulmonary Embolism Severity Index, aren’t new. “Many solutions being sold today as AI or ML use an algorithm to answer a narrow question. It’s just become a lot more efficient to do that with different data sources and methods,” Chen said.

He advised, “Rather than thinking, ‘What can I predict?,’ you should be thinking, ‘What’s the thing… I already know works but I don’t want to deploy to everybody? I need to find people… who might get the most value out of this intervention.’”

Chen encouraged the audience to think in terms of decisions that are:

  • Actionable: You must be able to do something you couldn’t already do.
  • Arbitrary: Predictions should be reserved for questions that cannot otherwise be answered.
  • Ascertainable: You need a way to check if the prediction was right.

“As a doctor, I’ve rarely wished the computer would give me an ICD-10 code,” said Chen. “Instead, …we should be thinking about… information-gathering and collation. Also, performing the next best steps… What would most other doctors do in this case?”

During day-to-day patient encounters, Dr. Pandita said, AI and ML can integrate information from many sources into the clinical workflow, to give physicians context and help them make informed decisions. These systems can reduce providers’ cognitive burden, freeing their energy and attention to engage with their patients.

Dr. Pandita shared a paper from Medical Informatics Research suggesting that these technologies improve error detection, drug management and patient stratification. She also noted that chatbots, a form of AI, can help hospitals triage more patients. Early in the pandemic, chatbots collected patients’ answers to simple screening questions and advised them whether they needed to be tested for COVID-19.

Correlation vs. Causation

Dr. Chen offered a hypothetical example of how AI and ML can go wrong: a model built to predict which hospital patients are at greatest risk of death. The model might notice that patients who receive a chaplain visit are the most likely to die. But the model’s likely recommendation—eliminate pastoral palliative care—would be a gross miscalculation.

“It’s all correlation versus causation,” Chen explained. “The hardest thing to teach a computer is common sense. You told it to accurately predict death. It’s going to get its hands on any data it can and try to make that prediction.”

We know smoking causes cancer. But AI and ML don’t. They “don’t know and they don’t care,” warned Dr. Chen. “All they’re trying to do is be accurate in predicting what’s going to happen in the future. …That’s all we told them to care about.”
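A tiny, entirely synthetic simulation shows how easily that happens. In the made-up data below, a hidden factor (how sick a patient is) drives both chaplain visits and deaths; a model told only to predict death still leans heavily on the chaplain-visit flag. Every variable and number here is an assumption for illustration:

```python
# Synthetic illustration of correlation vs. causation: chaplain visits and
# death are both driven by illness severity, which the model never sees.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
severity = rng.uniform(0, 1, n)                          # hidden driver of both
died = rng.uniform(0, 1, n) < severity * 0.5             # sicker patients die more often
chaplain_visit = rng.uniform(0, 1, n) < severity * 0.7   # ...and get visited more often

X = chaplain_visit.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, died)

# Large positive coefficient: the model "thinks" chaplain visits predict death.
# Cancelling pastoral care would change none of the underlying outcomes.
print("coefficient on chaplain visit:", round(model.coef_[0][0], 2))
```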

Fighting Bias

There is deep concern that the underlying algorithms in AI and ML systems could be unintentionally biased. Dr. Pandita pointed to a potential disconnect with algorithm developers, whose backgrounds are predominantly in computer science. They “may not know that data needs to be looked at from the lens of diversity, equity and inclusion.”

“The majority of social determinants—at least 80% of them—lie outside of the health care system,” observed Dr. Pandita. She offered examples of other data that algorithms should include:

  • Schools.
  • Housing.
  • Food scarcity.
  • Neighborhood characteristics.

“The majority of machine learning is not even close to that, if you’re simply looking at EHR data as the source of that information,” Dr. Pandita said. Similarly, her experience at Hennepin Healthcare in Minneapolis shows “one-size-fits-all does not fit everywhere.”

The Hennepin safety-net health system sees mainly Medicaid enrollees. More than 1 in 3 patients speak a language other than English or come from a marginalized community. “The type of machine learning AI algorithms I need in my system may be very different from [those needed in] a mature system,” she explained.

Drs. Chen, Pandita and Rajput gave the Digital Quality Summit audience a lot to think about. Check back on blog.ncqa.org for other Summit highlights.
