As a neuroscientist surveying the landscape of generative AI—artificial intelligence capable of generating text, images, or other media—Konrad Kording cites two potential directions forward: One is the “weird future” of political use and manipulation, and the other is the “power tool direction,” where people use ChatGPT to get information as they would use a drill to build furniture.
“I’m not sure which of those two directions we’re going, but I think a lot of the AI people are working to move us into the power tool direction,” says Kording, a Penn Integrates Knowledge (PIK) University Professor with appointments in the Perelman School of Medicine and School of Engineering and Applied Science. Reflecting on how generative AI is shifting the paradigm of science as a discipline, Kording said he thinks “it will push science as a whole into a much more collaborative direction,” though he has concerns about ChatGPT’s blind spots.
Kording joined three University of Pennsylvania researchers from the chemistry, political science, and psychology departments, who shared their perspectives in the recent panel “ChatGPT turns one: How is generative AI reshaping science?” PIK Professor René Vidal opened the event, which was hosted by the School of Arts & Sciences’ Data Driven Discovery Initiative (DDDI), and Bhuvnesh Jain, professor of physics and astronomy and co-faculty director of DDDI, moderated the discussion.
“Generative AI is moving so rapidly that even if it’s a snapshot, it will be very interesting for all of us to get that snapshot from these wonderful experts,” Jain said. OpenAI launched ChatGPT, a large language model (LLM)-based chatbot, on Nov. 30, 2022, and it rapidly ascended to ubiquity in news reports, faculty discussions, and research papers. Colin Twomey, interim executive director of DDDI, told Penn Today that it’s an open question as to how it will change the landscape of scientific research, and the idea of the event was to solicit colleagues’ opinions on interesting directions in their fields.
In honor of what he called “ChatGPT’s birthday party,” assistant professor of chemistry Andrew Zahrt asked the chatbot about its use in his field and got the response, “Generative AI in chemistry serves as a creative digital assistant, helping scientists design new molecules and materials by predicting their properties and suggesting innovative combinations, ultimately accelerating the drug discovery and materials development processes.” Zahrt called that “a really good description.”
He said when it comes to generative AI, the chemistry field is still in the proof-of-principle stage, where much of the research asks whether scientists can use generative models to propose reasonable chemical structures.
“To actually synthesize that molecule and verify that it does what you want it to do is not trivial, and a lot of times the population of people who can do generative modeling are not the same population of people who can go in a lab, make a molecule, and test it,” Zahrt said. “There’s many studies that propose new molecules that should be better for something, but the number of studies that actually validate those claims experimentally is pretty small.”
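That first, purely computational step is cheap to illustrate. The minimal sketch below, which assumes the open-source RDKit library and a handful of made-up SMILES strings standing in for a generative model’s output, shows the kind of sanity check Zahrt’s “proof of principle” work involves; it is an illustration, not a tool or workflow named by the panel.

```python
# A minimal sketch of the proof-of-principle step: checking whether
# strings proposed by a generative model even parse as valid molecules.
# RDKit and the example SMILES strings are illustrative assumptions.
from rdkit import Chem
from rdkit.Chem import Descriptors

# Hypothetical generative-model output: candidate molecules as SMILES
# strings (the last one is deliberately malformed).
candidates = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "C1CC1C("]

for smiles in candidates:
    mol = Chem.MolFromSmiles(smiles)  # returns None for invalid SMILES
    if mol is None:
        print(f"{smiles!r}: not a valid structure")
    else:
        print(f"{smiles!r}: valid, MW = {Descriptors.MolWt(mol):.1f}")
```

Checks like this take milliseconds; the gap Zahrt describes is that the next step, synthesizing a candidate and testing it in the lab, has no comparable shortcut.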
Nick Pangakis, a Ph.D. student in political science whose research focuses on integrating AI tools into the social sciences, said those disciplines are also at the proof-of-concept stage. One area where this has been successful, he said, is using generative AI to elicit opinions in online surveys.
He noted that in recent peer-reviewed papers in the social sciences, researchers have used generative AI not only as the survey-taker but also as the survey subject: they assigned the AI certain characteristics, asked it survey questions, and compared its responses to those humans gave in real-world surveys. Pangakis said the research found that generative AI “can approximate reasonably well what the real-world survey questions are reporting.”
“The implication of this that I think we will see is that corporations will say, ‘We don’t need to talk to consumers; let’s talk to generative AI,’ or maybe politicians will say, ‘We don’t need to interview voters; let’s talk to generative AI,’” Pangakis said. “That probably should be raising red flags. Do they really represent beliefs, or are they just modeling probabilistic algorithms that predict what the next word is?”
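The setup Pangakis describes is straightforward to sketch. The snippet below uses the OpenAI Python client to assign a respondent profile and pose a survey item; the model name, profiles, and question wording are illustrative assumptions on our part, since the panel did not specify any particular implementation or study.

```python
# A minimal sketch of using an LLM as a simulated survey respondent:
# assign a profile, ask a survey item, collect the answer for
# comparison with human survey data. Model name, profiles, and wording
# are hypothetical; assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical respondent profiles of the kind such studies assign.
profiles = [
    "a 34-year-old suburban teacher who votes in most elections",
    "a 68-year-old retired farmer who rarely follows national news",
]

question = (
    "On a scale of 1 (strongly disagree) to 5 (strongly agree), "
    "how much do you agree that your local economy is improving? "
    "Answer with a single number."
)

for profile in profiles:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer as {profile}."},
            {"role": "user", "content": question},
        ],
    )
    print(profile, "->", response.choices[0].message.content)
```

In the studies Pangakis cites, the comparison step aggregates many such simulated answers and checks them against real survey distributions, which is exactly where his “red flags” question bites: matching the marginals is not the same as representing beliefs.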
The panelists spoke to an engaged room of students, staff, and other members of the Penn community gathered at the Perelman Center for Political Science and Economics. One attendee asked what happens when AI-generated synthetic data increasingly replaces human participants.
Sudeep Bhatia, associate professor of psychology, addressed that challenge using the example of ChatGPT-written recommendation letters. He said these models “are trained on vast amounts of language data that’s been posted on the internet over the past 40 years.” He noted that if people stop writing recommendation letters without ChatGPT, “ultimately, there will stop being human data for these models to continue to be trained on. That’s the biggest vulnerability that these models have moving forward.”
As for use in the psychology field, Bhatia commented, “For a psychologist to say we should stop studying humans and study GPT as a proxy for humans is frankly absurd.” But he cited other useful applications in the field, such as utilizing LLMs to understand how people learn or use languages and integrating ChatGPT into research on high-level human cognition.
Many innovations, Zahrt said, “are driven by our ability to come up with ideas, and a lot of the merit of those ideas is fairly subjective,” but AI could help. Jain concluded, “From my point of view, AI tools can really shorten the path from individual curiosity to an actual research exploration to a discovery, and that’s quite wonderful.”