
New Research Suggests: To Get Patients to Accept Medical AI, Remind Them of Human Biases

Patients who are initially resistant to AI-driven healthcare may become more open to it after being reminded of biases in human treatment.


While people are growing more accustomed to AI-driven personal assistants, customer service chatbots and even financial advisors, most still want their healthcare delivered with a human touch.

Given that receiving healthcare is a deeply personal experience, it’s understandable that patients prefer it to come from, well, a person. But with AI’s vast potential to increase the quality, efficacy and efficiency of medicine, a push toward greater acceptance of artificial intelligence-driven medicine could unlock benefits for patients and providers.

How, then, can the industry help nudge the public to feel more comfortable with medical AI?

According to a new study from researchers at Lehigh University and Seattle University, making the concept of bias more salient in patients’ thinking can help.

Study explores ‘bias salience’

Rebecca J. H. Wang, associate professor of marketing

The study, published in the journal Computers in Human Behavior, found that patients were more receptive to medical recommendations from AI when they were made more aware of the biases inherent in human healthcare decisions. This “bias salience,” or making people more conscious of bias in decision-making, was shown to shift people’s perceptions.

“This increased receptiveness to AI occurs because bias is perceived to be a fundamentally human shortcoming,” said Rebecca J. H. Wang, associate professor of marketing in the Lehigh College of Business. “As such, when the prospect of bias is made salient, perceptions of AI integrity—defined as the perceived fairness and trustworthiness of an AI agent relative to a human counterpart—are enhanced.”

The study entailed six experiments with nearly 1,900 participants, demonstrating that when participants were reminded of human biases—such as how healthcare providers might treat patients differently based on characteristics like gender—they became more receptive to AI recommendations.

Participants were presented with scenarios in which they would be seeking a recommendation or diagnosis, such as a coronary bypass or skin cancer screening. They were then asked whether they preferred the recommendation to be made by a human being or by a computer/AI assistant.

Before seeing the scenario, some participants viewed screens intended to increase bias salience. These interventions included reviewing an infographic that highlighted common cognitive biases and describing a time when they were negatively affected by bias, such as age-related bias (for participants over 50) or gender-related bias. The results showed:

  • When participants were made aware of potential biases in human healthcare, they rated AI as offering greater "integrity," meaning they perceived it as more fair and trustworthy.
  • While bias salience did not eliminate people’s general preference for human healthcare, it did reduce resistance to medical AI, presumably because bias is more readily associated with human beings.
  • In the absence of bias salience, the subjectivity of human providers is often viewed as a positive, but when bias salience is high, patients place greater value on the perceived objectivity of AI.

The future of AI in medicine

The authors stress the importance of keeping human biases in mind at all stages of AI proliferation, from development to adoption. Developers of AI systems should aim both to minimize the bias inherent in the materials used to train medical AI, a known issue in current AI development, and to provide context about human bias when users encounter these systems.

Doing so could help providers capitalize on AI’s growing role in applications such as diagnostics, treatment recommendations and patient monitoring. Those roles are only expected to expand as the industry is projected to invest more than $30 billion annually in AI medicine by 2029.

“By addressing patients’ concerns about AI and highlighting the limitations of human judgment, healthcare providers can create a more balanced and trusting relationship between patients and emerging technologies,” Wang said.

The study was a collaboration between Wang; Lucy E. Napper, associate professor of psychology at Lehigh; Jessecae Marsh, professor of psychology and associate dean at Lehigh; and Mathew S. Isaac, professor and chair of the department of marketing at Seattle University. The paper, “To err is human: Bias salience can help overcome resistance to medical AI,” is available online.

Wang recently discussed this research on an episode of Lehigh University’s College of Business IlLUminate podcast.

Story by Dan Armstrong

Photography by iStock
