AI could transform social science research

Penn Integrates Knowledge University Professor Philip Tetlock and researchers from the University of Waterloo, University of Toronto, and Yale discuss AI and its application to their work.


In an article recently published in the journal Science, researchers from the University of Pennsylvania, University of Waterloo, University of Toronto, and Yale University looked at how artificial intelligence, particularly large language models (LLMs), could change the nature of their work.
 
“LLMs might supplant human participants for data collection,” says Penn Integrates Knowledge University Professor Philip Tetlock, coauthor of the article.
 
“In fact, LLMs have already demonstrated their ability to generate realistic survey responses concerning consumer behavior. LLMs will revolutionize human-based forecasting in the next three years and it won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90% chance on that. Of course, how humans react to all of that is another matter.”
 
Traditionally, the social sciences have relied on a range of methods, including questionnaires, behavioral tests, observational studies, and experiments. A common goal of social science research is to obtain a generalized representation of the characteristics of individuals, groups, and cultures, and of their dynamics. With the advent of advanced AI systems, the landscape of data collection in the social sciences may shift.
 
“What we wanted to explore in this article is how social science research practices can be adapted, even reinvented, to harness the power of AI,” says Igor Grossmann, first author and associate professor of psychology at the University of Waterloo.
 
Tetlock and colleagues note that artificial intelligence models, particularly large language models trained on vast amounts of text data, are increasingly capable of simulating human-like responses and behaviors. This offers novel opportunities for testing theories and hypotheses about human behavior at great scale and speed.
 
“AI models can represent a vast array of human experiences and perspectives, possibly giving them a higher degree of freedom to generate diverse responses than conventional human participant methods, which can help to reduce generalizability concerns in research,” says Grossmann.
 
While opinions on the feasibility of this application of advanced AI systems vary, studies using simulated participants could generate novel hypotheses that could then be confirmed in human populations.
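To make the simulated-participant idea concrete, here is a minimal sketch of how such a study might be set up, assuming access to an LLM chat API (the OpenAI Python client is used here for illustration). The personas, survey item, and model name are hypothetical examples, not drawn from the Science article.

```python
# Minimal sketch: using an LLM as a simulated survey participant.
# Assumes the OpenAI Python client (`pip install openai`) and an API key
# in the OPENAI_API_KEY environment variable. The personas and item below
# are illustrative only.
from openai import OpenAI

client = OpenAI()

PERSONAS = [
    "a 34-year-old teacher in a mid-sized Canadian city",
    "a 61-year-old retired engineer in rural Texas",
]

SURVEY_ITEM = (
    "On a scale of 1 (strongly disagree) to 7 (strongly agree), how much "
    "do you agree with the statement: 'I trust online product reviews.' "
    "Answer with a number and one sentence of reasoning."
)

def simulate_response(persona: str, item: str) -> str:
    """Ask the model to answer a survey item in character as `persona`."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; the choice shapes response style
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Answer as that person would."},
            {"role": "user", "content": item},
        ],
        temperature=1.0,  # sampling noise stands in for between-person variance
    )
    return completion.choices[0].message.content

for persona in PERSONAS:
    print(persona, "->", simulate_response(persona, SURVEY_ITEM))
```

In a real study, responses like these would be collected across many personas and sampling runs, then treated as hypothesis-generating data to be validated against human participants, not as a substitute for them.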
 
But the researchers warn of possible pitfalls in this approach, including the fact that LLMs are often trained to exclude the socio-cultural biases that real humans hold. This means that sociologists using AI in this way could not study those biases.
 
They also note that the field will need to establish guidelines for the governance of LLMs in research.
 
“Pragmatic concerns with data quality, fairness, and equity of access to the powerful AI systems will be substantial,” says Dawn Parker, a coauthor of the article and professor at the University of Waterloo. “So, we must ensure that social science LLMs, like all scientific models, are open-source, meaning that their algorithms and ideally data are available to all to scrutinize, test, and modify. Only by maintaining transparency and replicability can we ensure that AI-assisted social science research truly contributes to our understanding of human experience.”
 
Philip Tetlock is the Leonore Annenberg Penn Integrates Knowledge University Professor of Democracy and Citizenship with appointments in the School of Arts & Sciences and the Wharton School.
 
The research was supported by the Social Sciences and Humanities Research Council of Canada (611-2020-0190 and 435-2014-0685) and the John Templeton Foundation (62260).