A Penn team’s push to make research more inclusive

Penn’s Palliative and Advanced Illness Research (PAIR) Center is working to bring more people from underrepresented racial and ethnic backgrounds into its research, and to guard against bias in the AI models trained on research data.

Research is a driving force of medical progress—but is it truly inclusive of the voices and experiences of those it seeks to help? 

The way research is conducted can often leave out important voices, like people from underrepresented racial and ethnic backgrounds, those who speak languages other than English, or those with limited literacy. Rachel Kohn, an assistant professor of medicine in the Division of Pulmonary, Allergy and Critical Care and core faculty in Penn’s Palliative and Advanced Illness Research (PAIR) Center, is looking to change that.

Health care research, while indispensable for advancing medical knowledge and improving patient outcomes, has long grappled with a glaring issue: the lack of diversity and inclusivity. “In health care, we aim to leave no one behind. But when certain demographics are excluded or marginalized in research, we’re failing to uphold that promise,” says Kohn. A study led by Kohn, published in the Journal of General Internal Medicine, addresses the underrepresentation and disparities prevalent in research practices.

Working with a group of colleagues through Penn’s Joint Research Practices, Kohn developed a clear goal: to make academic research more inclusive, equitable, and accessible for everyone. Years of investigation followed, as the group met to discuss findings, refine its focus, and form subgroups that delved into specific areas to ensure a well-rounded perspective. What Kohn and colleagues have now developed is a set of guidelines covering everything from how participants are paid to how research findings are communicated.

Kohn raises another concern for diversity, equity, and inclusion in academic research: artificial intelligence, or AI.

AI models don’t begin full of information; they must be fed source material to interpret. What happens if they are continually fed data that is biased or inaccurate?

“Research findings are more and more frequently being fed into AI models to serve as clinical decision support systems for patient care, which can have far-reaching effects,” explains Kohn. That reach heightens concerns about biased data being fed into AI.

“That biased data could propagate into clinical decision support systems, research questions, trial eligibility, risk adjustment, or hospital and quality improvement assessment. One major concern about this process is that clinicians rarely know the source of the data and assume that they should take the decision support recommendation at face value without pausing to consider the algorithm inputs. ‘Algorithmic bias’ is a hugely burgeoning field trying to address this very issue,” explains Kohn.

This story is by Matt Toal. Read more at Penn Medicine News.