Giving Compass' Take:
- Law professor Nicholson Price answers questions about AI and bias in healthcare, exploring how more technology could potentially increase health disparities.
- How can tech and AI be harmful to minority groups seeking healthcare? What are ways for donors to support inclusive healthcare tech?
- Read more about the challenges and solutions of AI in healthcare.
Data sources that “teach” artificial intelligence could amplify and worsen disparities in health care, says law professor Nicholson Price.
Those data sources are not representative and/or are based on data from current unequal care, says Price, a member of the University of Michigan’s Institute for Healthcare Policy & Innovation.
In a recent article in Science, Price and colleagues Ana Bracic of Michigan State University and Shawneequa Callier of George Washington University argue that these disparities persist despite efforts by physicians and health systems, including strategies such as diverse workforce recruitment and implicit bias training.
Here, Price answers questions about bias and AI in health care:
What is an example of anti-minority culture?
There are depressingly many examples of cultures that include deeply embedded biases against minoritized populations (that is, populations constructed as minorities by a dominant group). We focus on Black patients in medicine in the article (who are stereotyped as being less sensitive to pain, among a host of other pernicious views), but we could just as easily have focused on Native American patients, transgender patients, patients with certain disabilities, or even women in general (who, even though they’re a numerical majority, are often still minoritized).
So this influences research participation/recruitment and AI, such as Black participants declining participation?
Exactly. We start the piece by describing patterns of clinical care that involve self-reinforcing cycles of exclusion, but then step back to show how these dynamics also occur in patient recruitment for big data and then AI. The research participation story relies heavily on an earlier study that showed different rates of consent for big-data research participation (in the Michigan Genomics Initiative) among members of different minority groups.
In this project, we build on that work (and other work on research participation by Shawneequa Callier, the third coauthor of this piece) to lay out cyclical dynamics: bias leads to inadequate recruitment, which leads to lessened engagement, which in turn reinforces perceptions of minoritized patients as less interested in research, creating a repeating, strengthening cycle. The same pattern shows up in medical AI.
Read the full article about AI and health disparities by Jared Wadley at Futurity.