
At a time when generative AI tools like ChatGPT are reshaping communication, learning, and creativity, a summer workshop hosted at the Franklin Institute is helping high schoolers do more than just use AI—they are learning how to build it. Led by Luis Morales-Navarro, a doctoral student in the Graduate School of Education’s Learning Sciences and Technologies program, the “babyGPTs” workshop invites teenagers to construct small generative language models from scratch, empowering them to engage critically with the design, ethics, and limitations of artificial intelligence.
Involving young people in creating AI systems is a focus of the work of Yasmin Kafai, the Lori and Michael Milken President’s Distinguished Professor at Penn GSE and Morales-Navarro’s doctoral advisor. Earlier this year, Kafai led the CreateAI Workshop on campus, gathering experts from industry, academe, and education to encourage K–12 students and their teachers to become active creators of AI technologies, not just consumers of them. These latest Franklin Institute workshops, co-designed by Morales-Navarro under the guidance of Kafai and in partnership with Danaé Metaxa from Penn’s School of Engineering and Applied Science, are an example of that work.
Rather than treating students as passive recipients of information, Morales-Navarro and his team employ a participatory design approach for their workshops: “We don’t think of this as a traditional classroom,” he says. “The workshop is structured so that teens are treated as protagonists in the design process. They decide what kind of model to build, what data to use, and what ethical questions to consider.”
Over the course of five days, participants created their babyGPTs using the nanoGPT framework—small-scale generative language models trained on 75,000 to 300,000 tokens of hand-curated data. (For comparison, GPT-3.5, the model behind the original ChatGPT, was trained on hundreds of billions of tokens.) Students formed teams, sourced data from movie scripts or recipes, and submitted training jobs to generate models with outputs tailored to their chosen themes. As one student described, “It was really funny reading what came out. It looked good at first, but when you zoom in, you’re like—‘What is this?’”
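To give a sense of what building a model at this scale involves, the sketch below shows a deliberately simplified, character-level language model trained on a small curated text file and then sampled from. It is an illustration only, not the workshop’s actual nanoGPT code: the file name corpus.txt, the hyperparameters, and the training loop are assumptions chosen to keep the example short and runnable.

```python
# Illustrative sketch of the "babyGPT" idea (not the workshop's actual code):
# train a tiny character-level transformer on a small hand-curated corpus,
# then sample text in its style. Sizes and file names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# --- Data: a small curated corpus, e.g. movie scripts or recipes ---
text = open("corpus.txt", encoding="utf-8").read()  # assume roughly 100k-300k characters
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

block_size, batch_size = 128, 32  # context length and batch size (toy values)

def get_batch():
    # Random contiguous chunks of the corpus as (input, next-character) pairs
    ix = torch.randint(len(data) - block_size - 1, (batch_size,))
    x = torch.stack([data[i:i + block_size] for i in ix])
    y = torch.stack([data[i + 1:i + block_size + 1] for i in ix])
    return x, y

class BabyGPT(nn.Module):
    def __init__(self, vocab, d_model=128, n_head=4, n_layer=2):
        super().__init__()
        self.tok = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(block_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, 4 * d_model,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, idx):
        T = idx.size(1)
        h = self.tok(idx) + self.pos(torch.arange(T, device=idx.device))
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(idx.device)
        h = self.blocks(h, mask=mask)  # causal mask gives GPT-style left-to-right attention
        return self.head(h)

model = BabyGPT(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(2000):  # a short toy training run
    x, y = get_batch()
    logits = model(x)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 200 == 0:
        print(step, loss.item())

# --- Sampling: generate a few hundred characters in the corpus's style ---
idx = torch.zeros((1, 1), dtype=torch.long)  # start from an arbitrary character
for _ in range(300):
    logits = model(idx[:, -block_size:])
    probs = F.softmax(logits[:, -1, :], dim=-1)
    idx = torch.cat([idx, torch.multinomial(probs, 1)], dim=1)
print("".join(itos[int(i)] for i in idx[0]))
```

At this scale the output tends to look plausible at a glance and fall apart on closer reading, which is exactly the experience the students describe above.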
Facilitators encourage reflection at every step, and students are asked to consider, “Is it okay to train models with data you didn’t create?” and “What happens if your model gives wrong or biased information?”
“A lot of students came in with fears or biases about AI, mostly from what they’ve heard online or from teachers,” says Carly Netting, manager of youth programs at the Franklin Institute. “Being able to build these models and see how they work demystifies the technology and helps them understand that AI is a human-designed system.”
Read more at Penn GSE News.