Ensuring the safety and security of AI-controlled systems

Neeraj Gandhi, a doctoral candidate in computer and information science, has developed new approaches to address challenges in security and safety for modern cyber-physical systems.

When an Uber car picking up passengers is a robot, passengers want assurance that the ride is going to be affordable, efficient, smooth—and safe.

That’s where researchers like Neeraj Gandhi, a doctoral candidate in Computer and Information Science (CIS) and a scholar at the Penn Research in Embedded Computing and Integrated Systems Engineering (PRECISE) Center, come in.

From left: Neeraj Gandhi; Mingmin Zhao, assistant professor in computer and information science (CIS); Linh Thi Xuan Phan, associate professor in CIS and Gandhi’s advisor; Oleg Sokolsky, Research Professor in CIS; and Insup Lee, Caitlin Fitler Moore Professor in CIS and director of the PRECISE Center. (Image: Courtesy of Penn Engineering)

Gandhi focuses on improving the safety and security of networks of computers that collaborate to control physical devices, such as self-driving cars. He also studies these processes in multirotor aerial drones, which are used for agriculture, mining, mapping, surveillance and intelligence, as well as system security for tasks performed by teams of robots.

Gandhi first became interested in cyber-physical systems (CPS) in high school, where he worked on building a simple robotic arm controlled by muscle electrical signals. He’s also researched extraterrestrial mining robots, body sensor networks for dementia agitation detection and prediction, and photoacoustic imaging for improved surgical guidance in traditional and robotic surgery.

With research spanning a range of systems and locales, including vehicles, factories, and robots, Gandhi’s goal is for real-world practitioners to use his research to make their AI processes safer and more tolerant of the faults that will inevitably occur. “It is practically impossible to prevent systems from undergoing any fault, so the most we can do is ensure that when faults do occur, they are guaranteed to be handled safely,” Gandhi says.

Gandhi’s most recent research project helps AI systems recover from a fault using fewer computing resources, which keeps the system safe while allowing it to run longer and more efficiently.

“This research enables lightweight security in systems like cars, which currently operate over unsecured communication mediums,” Gandhi says. “Designers concerned about the resource demand of techniques that were developed for distributed computing systems employed in data centers now have the option to use this less resource-intensive approach, while still being able to provide guaranteed protection against a broad set of benign and adversarial faults.”

This story is by Liz Wai-Ping Ng. Read more at Penn Engineering.