Tele-Immersion System Is First 'Network Computer,' with Input, Processing and Output in Different Locations
PHILADELPHIA – When they make their first public demonstration of tele-immersion at this week's Supercomputing 2002 conference in Baltimore, computer scientists will achieve another first: a "network computer" that processes data at a location far removed from either input or output.
While the tele-immersion system will gather and display information in side-by-side booths at the Baltimore Convention Center, actual data processing will occur some 250 miles away at the Pittsburgh Supercomputing Center. Previous demonstrations of tele-immersion, a next-generation type of ultra-realistic videoconferencing that draws upon Internet2 and technology similar to that used in 3D movies, have relied upon local computing power at the University of Pennsylvania and other participating institutions.
"Shifting the computing from 10 processors at Penn to 1,240 parallel machines based in Pittsburgh will speed data processing 75-fold, turning tele-immersion into a true real-time technology," said Kostas Daniilidis, an assistant professor of computer and information science at Penn. "It now takes our tele-immersion system roughly 15 seconds to scan, process and display the entire volume of a typical room. With help from the Pittsburgh Supercomputing Center, that time will shrink to 200 milliseconds."
This week's tele-immersion demonstration in Baltimore, presented by scientists from Penn and the University of North Carolina at Chapel Hill, is the first large-scale public display of the technology. Drawing on a bank of cameras that constantly scans participants and their surroundings, tele-immersion allows participants in different states to feel as if they're chatting in the same room. But gathering such comprehensive, real-time measurements of a person and his environment takes a toll: Tele-immersion generates huge amounts of data, requiring massive computing power and bandwidth.
The boost in computing power achieved with the move to the Pittsburgh Supercomputing Center will permit at least one significant advance in tele-immersion's capabilities: For the first time, the system will be able to image an entire room in real time. Previously, limited processing power confined live imaging to the small area where participants were seated, while the background stayed static, much as a television anchor sits before an unchanging image of a city skyline.
"The reassigning of tele-immersion data processing to a faraway supercomputing center is a milestone for grid computing, which uses remote machines to process data," Daniilidis said. "If connections are fast enough – as with Internet2 – the network itself becomes a giant computer, linking processors scattered over many hundreds of miles. This tele-immersion experiment shows definitively that a network computer configured this way can handle extremely data-intensive operations much more quickly than if processing were occurring within the confines of a single room."
All this computing is for a good cause. Daniilidis and his colleagues say tele-immersion may well revolutionize the way people communicate, allowing people on opposite ends of the country or world to feel temporarily as if they're in each other's presence. Key to tele-immersion's realistic feel are a hemispherical bank of digital cameras that capture participants from a variety of angles and tracking gear worn on their heads. Combined with polarized glasses much like those worn at 3D movies, the setup presents each eye with a subtly different image, mimicking the slightly different views our two eyes receive in daily life.
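That stereo effect rests on a standard idea: render the scene from two viewpoints separated by roughly the distance between human eyes, and let the polarized glasses route each view to the correct eye. The sketch below illustrates the geometry; the eye-separation value and function names are generic stereo-rendering conventions, not parameters taken from the Penn/UNC system.

```python
# How two slightly different eye viewpoints can be derived from one tracked
# head position. Values and names are illustrative, not from the real system.

import numpy as np

EYE_SEPARATION_M = 0.063  # typical human interpupillary distance, in meters

def eye_positions(head_pos, head_right_dir):
    """Return (left_eye, right_eye) positions for a tracked head.

    head_pos       -- 3-D position reported by the head-tracking gear
    head_right_dir -- vector pointing toward the viewer's right
    """
    head_pos = np.asarray(head_pos, dtype=float)
    right = np.asarray(head_right_dir, dtype=float)
    right = right / np.linalg.norm(right)
    offset = (EYE_SEPARATION_M / 2.0) * right
    return head_pos - offset, head_pos + offset

# Example: a viewer whose head is 1 m up and 2 m back, with +x to the right.
left, right = eye_positions([0.0, 1.0, 2.0], [1.0, 0.0, 0.0])
print(left, right)  # two viewpoints ~6.3 cm apart; each gets its own image,
                    # and the polarized glasses deliver one to each eye
```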
The tele-immersion collaboration involving Penn, UNC and the Pittsburgh Supercomputing Center is funded by the National Science Foundation.