With Siri and Alexa able to tell jokes, curate shopping lists, and help schoolchildren with their homework, the question of what distinguishes a human mind from a machine has taken new shape. This philosophical quandary is one that computer scientists have contemplated for more than three-quarters of a century.
It’s a query that has also been on the mind of historian of science Stephanie Dick since her graduate student days. To understand how computer scientists and others might answer it, Dick, an assistant professor in Penn’s Department of History and Sociology of Science in the School of Arts and Sciences, turns to the code they wrote, the computers they designed, and the problems they tasked these machines with solving.
With a focus on the history of computing and mathematics, particularly post-World War II, Dick has written about everything from the failures of Microsoft Windows to the earliest whispers of artificial intelligence and automated facial recognition. A common thread in her inquiries is asking how humans have theorized human faculties like intelligence and reason, and how they have translated those theories into the workings of computers.
“I care about the epistemological questions,” Dick says. “I want to know how we know with the machine, what we know with it, what it knows, if anything, and how our knowledge is different for working within the confines of what computers can and cannot do. To do that, there’s no way around getting into the code. That’s what I think is most important: to see how computer scientists have translated and transformed different problems and questions and ideas into code, and what is gained and lost in the process.”
An unexpected path
The fact that Dick is a tenure-track faculty member studying the humanities at an Ivy League institution comes as a bit of a surprise even to her. “I got a bit sidetracked,” she says, laughing, citing her original plans to study law and get into politics. Dick was raised in Calgary by parents who had grown up on farms in northern Alberta. “I had never known anyone who got a Ph.D. before,” she says.
But early on she recognized her own interest in rigorous intellectual challenges. She competed in debate at both the national and international levels in high school and figured a natural path would send her toward law school. But at the liberal arts college she attended, the University of King’s College in Halifax, Nova Scotia, first-year students participate in a great books program. She realized she felt most satisfied when immersed in the history of ideas.
“I was really drawn intellectually to that project,” she says, “and especially to the way, in particular moments in history, people came to think really differently about the natural world.”
King’s College had a program in the history of science and technology, and so alongside rigorous studies in math and a second major in philosophy, she took up that major and didn’t look back. “I think my whole academic career has been triangulating between those three different fields in various ways,” she says.
She went on to earn a master’s degree from the University of Toronto and a doctoral degree from Harvard University. Her Ph.D. advisor was Peter Galison, a physicist and historian of physics who took a different historical approach from most of his peers.
“He was one of the people in the history of science who was advocating for what we might call a turn to practice,” says Dick. “Historians of science had been writing histories of knowledge, writing histories of what scientists know and how they claimed to know it. Peter’s position was always that you can’t understand what people know or figure out how knowledge is produced without looking at what scientists actually do.”
In her dissertation at Harvard, Dick took this approach as her guide, considering how the introduction of computers shifted the practice of mathematics. Specifically, she examined how the logic that forms the basis of mathematical proofs was translated into computer operations. She did so not just by speaking with and reading the writings of computer scientists and programmers but by delving into the code itself.
“I became really interested,” says Dick, “in what happens when you take an algorithm or an abstract model and then actually make it run on a machine, implement it, especially in the early decades, when computers had mere kilobytes of memory; it’s astonishing that they could do much of anything at all. I wanted to know how abstract ideas were translated in order to accommodate the material constraints of the computer.”
During her doctoral studies, Dick learned a lot about 1950s programming languages, took her first formal computer science courses, and occasionally sought technical assistance from her historical sources.
“With some of my still-living historical actors, I would be interviewing them for hours and hours about their careers while also asking for their help to decipher what a bit of code was doing,” she says.
Minds and machines
Her dissertation evolved into what will become Dick’s first book, tentatively titled “Making Up Minds: Computing, AI, and Proof in the Postwar United States.” The work is a study of the branch of artificial intelligence aimed at automating mathematical theorem-proving during the second half of the 20th century at academic, industrial, and defense research laboratories across the country.
“The early automation of proof involved two tandem projects,” explains Dick. “Researchers had to decide what they thought mathematical proofs should look like, and they had to decide what cognitive faculties in people could be automated in order to prove them. Both parts of the project were controversial.”
Broken into three parts, “Reasoning,” “Knowledge,” and “Understanding,” Dick’s book delves into the disagreements that researchers had with one another about how people produce mathematical knowledge, including whether reason is key to mathematical intelligence, whether mathematical knowledge can be captured in a database and automated, and how to allow room in automated processes for human understanding.
“I coin the term ‘computational humanism’ to refer to technical practitioners who developed humanist critiques born out of intimacy with the machine,” she says, “rather than at a distance from it, as so many humanist critiques of AI have been.”
Dick’s work on her book progressed apace after she was selected as a Junior Fellow in Harvard’s Society of Fellows. Only 10 to 12 fellows are chosen each year, from any field, and supported for three years of postdoctoral scholarship. She used that time not only to make progress on her book but also to engage with the other Fellows, whose expertise varied widely.
“Interdisciplinarity is such a buzzword these days,” she says, “but I think it’s actually really hard to get enough shared terms to do meaningful scholarship across disciplines. Yet with the Junior Fellowship, even though our contact was mostly informal, it changed my work a great deal to be in conversation for three years with physicists and economists and political scientists, just getting to learn, through friendship and through conversation, what people in other disciplines care about.”
That exchange, she contends, made her a better scholar. “It helps you recognize all the things you take for granted.”
A historical lens on AI
Launching from Harvard into her faculty position at Penn in 2017, she brought that perspective to bear in her research as well as her teaching.
In one course this semester, Data and Death, she presents students with a variety of case studies in which computing and mathematics—seemingly the most impartial of subjects—can in fact reproduce and entrench bias and inequality. She gives the example of COMPAS, an algorithm that attempts to predict recidivism and is used by criminal justice systems to make decisions about bail, sentencing, and parole. According to a recent ProPublica investigation, it produces scores with higher false-positive rates for African American defendants and higher false-negative rates for white defendants.
COMPAS “claimed it had no racial bias because it had equal predictive accuracy for white defendants and black defendants and Latino defendants,” she says, even as false-positive and false-negative results were lopsided by race. “Machine learning theorists then discovered that you can’t actually optimize machine learning systems for multiple fairness constraints at the same time. If we want to use tools like this, will we have to translate our definitions of ‘fairness’ or ‘equality’ or ‘ethics’ more broadly into the formal terms and constraints of machine learning systems? In many cases, we should not do so.”
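The tension Dick describes can be seen in miniature with a toy calculation (the counts below are entirely hypothetical, not COMPAS data): when two groups have different underlying rates of reoffending, a score calibrated to the same predictive value for both groups must produce unequal false-positive and false-negative rates. A minimal Python sketch:

    # Illustrative sketch of the fairness trade-off (hypothetical counts, not COMPAS data).
    # Each group has a confusion matrix from a recidivism-style classifier:
    # tp/fp = correctly/wrongly labeled "high risk", fn/tn = wrongly/correctly labeled "low risk".
    groups = {
        # Group A: 60 of 100 people reoffend (base rate 0.60)
        "A": {"tp": 36, "fp": 12, "fn": 24, "tn": 28},
        # Group B: 30 of 100 people reoffend (base rate 0.30)
        "B": {"tp": 12, "fp": 4, "fn": 18, "tn": 66},
    }

    for name, m in groups.items():
        ppv = m["tp"] / (m["tp"] + m["fp"])  # how often a "high risk" label is correct
        fpr = m["fp"] / (m["fp"] + m["tn"])  # non-reoffenders wrongly labeled high risk
        fnr = m["fn"] / (m["fn"] + m["tp"])  # reoffenders wrongly labeled low risk
        print(f"Group {name}: PPV={ppv:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")

    # Output:
    # Group A: PPV=0.75  FPR=0.30  FNR=0.40
    # Group B: PPV=0.75  FPR=0.06  FNR=0.60

Both groups see the same predictive value (PPV), yet the error rates diverge sharply by group; with unequal base rates, no threshold can equalize all three measures at once, which is the impossibility result the machine learning theorists identified.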
In a second book, Dick plans to explore what happened when computer technology was introduced to domestic policing, examining the implications of using computerized databases, algorithms, and automated recognition of faces, fingerprints, and other identifying details in the 1960s and ’70s. And while she notes that universities are devoting significant resources to confronting the pervasive ethical and practical challenges that arise around AI, she is concerned that the technology is taking hold faster than those concerns can be fully considered.
“I’m worried,” she says. “We make our machines, but then they constrain and shape and intervene in the course of our development; we accommodate them. We are always accommodating our technology. Everyone is asking, ‘What can we get computers to do?’ But we must also always ask, ‘Who will we become in tandem?’ and ‘Who do we want to be?’
“Kids are learning to calibrate their speech to accommodate Alexa, the justice system is being calibrated to accommodate the limitations of machine learning systems, laborers are learning to accommodate algorithmic management, scientists are calibrating their questions and answers to the methods of ‘big data.’ When we engineer our technology, we are engineering our future selves at the same time.”
It’s with her historical, humanist perspective that Dick hopes to stimulate conversations about the ethical and epistemological consequences of turning over decision-making to our machines. “Happily, there are very good, thoughtful people working on this, including some here at Penn,” she says. “I am also fortunate to be working with a rich and diverse international and interdisciplinary community that seeks to bring historical perspectives on AI to bear on our present-day concerns. I keep coming back to the fact that we all need to be held accountable to people in different fields and to value systems other than our own.”