What big data reveals about online extremism

As extremist groups and fringe movements like QAnon have gained mainstream awareness, their ability to rapidly proliferate misinformation and conspiracies has put social media platforms under heightened public scrutiny. Facebook, Twitter, and other tech companies have been reprimanded by Congress and media outlets alike for failing to seriously address online radicalization among their users. As the United States has grown increasingly politically polarized, the question of whether these platforms’ algorithms—unintentionally, or by design—help users discover extreme and misleading content has become more urgent.

Homa Hosseinmardi, senior research scientist and lead researcher on the PennMap project with Penn's Computational Social Science Lab. (Image: ASC)

But as Homa Hosseinmardi points out, one major platform has received surprisingly little attention: YouTube. Hosseinmardi, a senior research scientist and lead researcher on the PennMap project with Penn's Computational Social Science Lab (CSSLab), part of the School of Engineering and Applied Science, the Annenberg School for Communication, and the Wharton School, notes that while YouTube is often perceived as an entertainment channel rather than a news source, it is perhaps the largest media consumption platform in the world.

“YouTube has been overlooked by researchers, because we didn’t believe that it was a place for news,” Hosseinmardi says. “But if you look at the scale, it has more than two billion users. If you take that population and multiply it with the fraction of news content watched on YouTube, you realize that the amount of information consumption on YouTube is way more than on Twitter.”

Hosseinmardi’s research is driven by questions about human behavior, especially in online spaces. In 2019, she joined the CSSLab, directed by Stevens University Professor Duncan Watts. In her work with the Lab, Hosseinmardi uses large-scale data and computational methods to gain insights into issues including media polarization, algorithmic bias, and how social networks affect our lives.

Several years ago, a team of researchers including Hosseinmardi and Watts became interested in the relationship between online radicalization and YouTube news consumption. To what extent do YouTube's algorithms foster engagement with heavily biased or radical content, and to what extent is this influenced by an individual's own online behavior? Put simply, Hosseinmardi wants to know: if people start from the same place on YouTube and watch a few videos in succession, do they all end up at the same destination?

This story is by Alina Ladyzhensky. Read more at Annenberg School for Communication.