Addressing bias in AI

In Policy Lab: AI and Implicit Bias, Penn Carey Law students propose solutions to address intersectional bias in generative AI.

The rapid expansion of artificial intelligence and the overwhelming popularity of generative AI tools like ChatGPT raise important questions about how algorithms and machine learning models reproduce real-world human biases. The University of Pennsylvania Carey Law School’s Policy Lab: AI and Implicit Bias incubates ideas for an intersectional, inclusive approach to artificial intelligence.

Graphic design profile of a human.
Image: Sylverarts for Adobe Stock

Taught by Rangita de Silva de Alwis, senior adjunct professor of global leadership, the course engages students in rigorous academic analysis and rich discussions with lawyers, researchers, designers, and international leaders in technology to examine the impact of intersectional bias in generative AI.

The Spring 2023 Policy Lab’s report, “A Promethean Moment: Towards an Understanding of Generative AI and Its Implications on Bias,” explores how bias arises in generative AI applications, demonstrates methods for measuring AI bias, and offers solutions for mitigating the impact of technology that reproduces human bias and exacerbates inequity.

The spring course featured a wide range of global experts. “These interdisciplinary experts represented a vast array of expertise in AI, including academics, industry practitioners, founders, and investors,” says Shashank Sirivolu, research assistant for the AI Policy Lab. “The advent of generative AI presented the class with an opportunity to engage in groundbreaking work to address the challenges and limitations of new technologies.”

In “A Promethean Moment,” student-authored essays call attention to a range of urgent issues, many of which are already the subject of intense public debate. The report explores social media consent disclosures and the potential exploitation of content creators, racial and gender disparity in AI-generated art and implications for copyright law, big data and cross-border data exploitation, the pitfalls of a self-regulated AI industry, and the risks of large language models trained on biased data sets.

“This report is a valuable primer for an academic, lawyer, or technologist interested in understanding generative AI,” Sirivolu says. “The report explores the legal, ethical, and social implications of bias in generative AI, including how bias arises, how it can be measured, and how it can be mitigated.”

Read more at Penn Carey Law.