Penn philosopher sheds new light on robots, artificial intelligence

What role does a philosopher play in building robots? If you’re Lisa Miracchi, an assistant professor in the Department of Philosophy in the School of Arts & Sciences, a bigger one than you might think.

When scholars began studying human intelligence, two schools of thought emerged, Miracchi says: One group held that human beings are simply computers, with mental states and actions explained in computational terms. The other camp believed that intelligence and the ability to think make humans more than just computers.

There are important similarities between human beings and computers, Miracchi says, but “the story is much more complicated.”

Miracchi studies artificial intelligence—she’s called a theoretical roboticist—and understanding her work requires first grasping what she means by the phrases “mental state” and “computational state.”

The mental states and actions of animals (including humans) typically include two aspects: consciousness and intentionality. As an example, take the act of preparing a cup of coffee. What a person experiences while measuring out grounds, adding water, etc., is consciousness. The intentionality, or directedness, relates to the person’s goal, in this case, to make the coffee itself.

“All intelligent creatures have goals,” Miracchi says. The robots we can currently build do not. In theory, she says, roboticists could create genuinely intelligent robots, but they must first better understand which factors provide humans with this capacity to think, as well as the complex interactions between the brain, body, and environment.

At its baseline, a computational system is composed of inputs, outputs, and the transitions between them, like a formula made up of ones and zeros that, when the appropriate symbols are substituted in, can answer a particular question. It includes many computational states. Robots are computational systems housed in some type of physical, artificial body.
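This definition can be made concrete with a toy example. The sketch below is an illustration of the general idea, not anything from Miracchi’s or her collaborators’ work: it models a minimal computational system, a parity checker, as a set of states plus a transition function mapping each state and input to the next state.

```python
# A toy computational system: a parity checker.
# It is defined entirely by its states ("even"/"odd"), its
# inputs (bits), and a transition function between them.

def transition(state: str, bit: int) -> str:
    """Map the current state and one input symbol to the next state."""
    if bit == 1:
        return "odd" if state == "even" else "even"
    return state

def run(bits):
    """Feed a sequence of inputs through the system; the output
    is simply the final computational state reached."""
    state = "even"  # initial state: zero ones seen so far
    for bit in bits:
        state = transition(state, bit)
    return state

print(run([1, 0, 1, 1]))  # three 1s seen, so the final state is "odd"
```

On this picture, the system passes through many computational states as it runs, but nothing in it has a goal; the distinction Miracchi draws is between a system like this and a creature whose states are directed at something.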

Miracchi’s main work looks at the relationship between the mental and the computational.

“Roboticists often assume some connection between actions that we’re interested in and certain computational states,” she says. “We’re pretty good at building robots once we know what system we want to build. But there’s general agreement that we haven’t been able to build systems that ‘understand’ what’s going on in the world.”

Part of the problem stems from the fact that the field may not be asking the proper questions. Right now, most work focuses on programming robots to represent goals, but Miracchi says representing a goal and having a goal are not the same.

“We need to work more on how the robot interacts with its environment,” she says.

She also says she wants to better understand how intelligent beings—namely, humans—arise from something unthinking like a set of neurons. Neurons unquestionably help us think, but it’s the how that intrigues her. In other words, she is interested in how these “non-mental features” act in conjunction with their surroundings to produce mental features.

“This is where robotics is incredibly useful,” Miracchi says. Researchers can create simple, precise systems they can manipulate easily to get to the heart of how and why intelligent beings depend on non-intelligent, less complex components.

“We’re at the very early stages of understanding this,” she adds.

That’s the theoretical side of her work. Then there’s the applied side, which includes collaborating with Daniel Koditschek, the Alfred Fitler Moore Professor in the School of Engineering & Applied Science, to incorporate ideas about intelligence into the actual building of robots. Miracchi is giving a talk on the subject, open to the public, as part of an April 1 Penn Engineering colloquium.

Miracchi’s work lives at the intersection of these two worlds, where the abstract meets the functional. Answering some of these questions could mean thinking about intelligence, artificial or otherwise, in a new light.
