Depression in Black people goes unnoticed by AI models analyzing language in social media posts

A Penn analysis found that models developed to detect depression from language in Facebook posts did not work when applied to Black people.

Methods researchers developed to detect possible depression through language in social media posts don’t appear to work when applied to posts by Black people, according to a new analysis by researchers from Penn’s Perelman School of Medicine and its School of Engineering and Applied Science. The research, published in PNAS, points to a key area for improvement and underscores the importance of considering the intersection of race, health risks, and social media.


Past work found that the use of first-person pronouns (“I”) and certain categories of words (self-deprecating terms and expressions of feeling like an outsider) in social media posts was predictive of depression. However, in analyzing Facebook posts from more than 800 people, a sample that included equal numbers of Black and white individuals, some who reported having depression and some who did not, the researchers found that these “predictive” words were predictive mainly for white people.

“We were surprised that these language associations found in numerous prior studies didn’t apply across the board,” says one of the study’s senior authors, Sharath Chandra Guntuku, a researcher in the Center for Insights to Outcomes at Penn Medicine and an assistant professor (research) of Computer and Information Science in Penn Engineering. “We need to have the understanding that, when thinking about mental health and devising interventions for treatment, we should account for the differences among racial groups and how they may talk about depression. We cannot put everyone in the same bucket.”

When the types of words identified in the past as predictive of depression were plugged into an artificial intelligence (AI)-guided model, the researchers found that it performed “strong[ly]” among white people. However, the model was more than three times less predictive of depression when applied to Black people who use Facebook.
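Conceptually, this kind of evaluation amounts to training a classifier on word-based language features and then scoring it separately within each racial group. The sketch below illustrates only that setup; the data are synthetic placeholders, and the feature set, logistic regression model, and AUC metric are assumptions for illustration, not the study’s actual pipeline.

```python
# Minimal sketch (not the study's pipeline): fit a classifier on simple
# language features, then evaluate it separately per subgroup.
# All data here are synthetic; features, model, and metric are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 800  # roughly the sample size mentioned in the article

# Hypothetical per-person features: rate of first-person pronouns ("I"),
# rate of self-deprecating terms, rate of "outsider" words.
X = rng.normal(size=(n, 3))
race = np.array(["Black", "white"])[rng.integers(0, 2, size=n)]
y = rng.integers(0, 2, size=n)  # 1 = self-reported depression (synthetic)

X_tr, X_te, y_tr, y_te, race_tr, race_te = train_test_split(
    X, y, race, test_size=0.5, random_state=0
)

model = LogisticRegression().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

# Score within each subgroup separately; with real data, a large gap
# between these two numbers would reflect the disparity the study reports.
for group in ("Black", "white"):
    mask = race_te == group
    print(group, "AUC:", round(roc_auc_score(y_te[mask], scores[mask]), 3))
```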

Even when the researchers trained the AI model on language used by Black people in their posts, the model still performed poorly.

“Why? There could be multiple reasons,” says the study’s lead author, Sunny Rai, a postdoctoral researcher in computer and information science. “It could be the case that we need more data to learn depression patterns in Black individuals compared to white individuals. It could also be the case that Black individuals do not exhibit markers of depression on social media platforms due to perceived stigma.”

Something that potentially confounded the existing depression-detection models, the researchers found, was that Black participants tended to use “I” more overall in their posts, including those who did not report having depression.
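A base-rate difference like this can be checked directly, independent of depression status, by comparing how often each group’s posts use first-person singular pronouns. The following is a minimal sketch with made-up posts and a hand-rolled pronoun list; it is not the study’s lexicon or data.

```python
# Minimal sketch: compare the base rate of first-person singular pronouns
# across two groups, regardless of depression status. Posts and group
# labels below are made-up placeholders, not study data.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "i'm", "i've", "i'll"}

def first_person_rate(post: str) -> float:
    # Fraction of tokens in a post that are first-person singular pronouns.
    tokens = re.findall(r"[a-z']+", post.lower())
    if not tokens:
        return 0.0
    return sum(t in FIRST_PERSON for t in tokens) / len(tokens)

posts_by_group = {
    "group_a": ["I went to the park with my friends", "i'm feeling great today"],
    "group_b": ["The game last night was amazing", "Dinner at the new place downtown"],
}

for group, posts in posts_by_group.items():
    avg = sum(first_person_rate(p) for p in posts) / len(posts)
    print(group, "mean first-person rate:", round(avg, 3))
```

If the mean rate differs between groups even among people without depression, a model that leans heavily on “I” usage will systematically misfire for the group with the higher base rate.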

Read more at Penn Medicine News.