Robots are now helping people compose emails and vacuum their homes, but a Jetsons-style, live-in robot to help with more complex tasks still doesn’t exist.
“I want a robot to be able to enter a room it has never been in and quickly characterize and manipulate objects to perform an assigned task,” says Michael Posa, assistant professor in the School of Engineering and Applied Science with appointments in Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering. “Whether that’s assisting in the home, conducting search and rescue operations, or manufacturing items.”
But a robot is only as smart as it is trained to be. We have not yet been able to create robots with this real-world intelligence because of the way we teach them. Robots currently learn through repetitive training in simulations and controlled laboratory settings. They may be great at performing an extremely precise motion over and over again, but they struggle to react quickly to diverse stimuli in an uncontrolled environment. To improve that ability, robots need instruction on how to process information in novel environments where there is no time or opportunity to waste.
With funding from the National Science Foundation’s CAREER Award, Posa is working on a new teaching method where robots interact with objects in the real world, and observations from those interactions are used to create a training lesson plan or model. The new approach is real-world first, simulation second, a prioritization that Posa believes is key to building robots’ real-world intelligence.
To accomplish this, robots must be equipped to learn from small data sets. Unlike machine learning models such as ChatGPT, which draw on enormous data sets of language, images, and video found on the internet, data in robotics is hard to come by. Advanced robots remain expensive and require specialized skills to operate. While shared robotics data sets are growing in scale, they remain many orders of magnitude smaller than the data sets used to train large language models.
“As an example, for a robot to prepare meals in the home, it must master a huge range of different foods, tools, and cooking strategies,” says Posa. “While humans intuitively understand and learn from small data—teaching a human how to prepare a meal just once or twice would be enough for them to successfully accomplish the task on their own—computers require far more repetition in an environment with little to no interruption.”
This story is by Melissa Pappas. Read more at Penn Engineering Today.