The virtual assistant

Artificial intelligence has permeated many corners of life, from consumer purchasing and media consumption to health care—sometimes in ways we don’t even notice.

Lyle Ungar looking out a window with green reflection
Across campus, researchers like computer scientist Lyle Ungar (above) are working to better understand how artificial intelligence affects everyday life now and in the future. Wharton professor Kartik Hosanagar, for example, studies the influence of recommendations on media consumption. Sociologist Ross Koppel teaches about biomedical informatics, and Penn Medicine radiologist Suyash Mohan and colleagues are piloting use of AI in radiology.

The doggie raincoat was cute the first time around, modeled by an adorable mutt and found through an intentionally clicked link. But then, like an Internet phantom, the canine outerwear kept showing up: in ads along the right-hand side of an email browser, on Facebook, and in several news articles. A product seen on a website visited once reappeared as if multiplying.

It’s an experience that just about anyone who does anything on the web these days has likely had: Click on a link or visit a website and suddenly, that item follows your electronic path. The strategy is called ad remarketing, and it’s intended to capture the 98% of would-be consumers who view a product but don’t buy it. Often, artificial intelligence technology like machine learning is at play.

Artificial intelligence is, for better or worse, part of 21st-century life. It has permeated consumer purchasing and influences the popularity of streaming media. It affects the routes people drive and can sway health care treatment plans, even relationships. In its most common form, AI acts as a filter, surreptitiously curating large quantities of data behind the scenes to ease the human decision-making process, whether that’s selecting a song to listen to or a shirt to buy.

“People are aware of AI that looks revolutionary, like self-driving cars, but there are many more subtle uses that are more pervasive,” says Penn computer scientist Lyle Ungar. “As you go through life doing anything—making a credit card purchase, using a dating app, using Google Maps—all of these have huge amounts of AI helping you sort through the hopelessly large choices you have to make.”

More time on Facebook? That helps AI

Media consumption is a prime example. On any given day, nearly three-quarters of Facebook users in the U.S. log onto the platform at least once, spending about 38 minutes scrolling. Sixty-three percent look at Instagram daily, whiling away close to half an hour. Other social media sites like Snapchat, YouTube, and Twitter aren’t far behind.

The platforms keep users engaged by pointing out additional content they might want to view, and they do this successfully by using the massive quantity of data they have on each individual. “Facebook certainly knows my friends, what they like, who they hang out with, which politicians they endorse,” Ungar explains. “And if I’ve made 43,000 Google searches, Google knows all the words I’m browsing, what webpages I’ve visited recently, my location. All of this helps to inform what I might like to click on, and behind all of this is machine learning.”

Machine learning is often what’s meant by the phrase “artificial intelligence” today, but it’s actually just one facet of AI. In machine learning, the computer is given a huge number of labeled examples, such as photographs tagged “cancerous” or “healthy,” “dog” or “cat,” and it learns to predict labels for future inputs. Artificial intelligence can also include hand-built rules (such as business rules) or optimization algorithms like those used to determine ideal flight paths for planes.
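The learn-from-labeled-examples pattern described above can be sketched in a few lines. This is a minimal, hedged illustration of supervised learning using a one-nearest-neighbor rule on invented toy data; real systems use far richer features and far more sophisticated models.

```python
# Minimal sketch of supervised machine learning: the computer sees
# labeled examples, then predicts labels for new, unseen inputs.
# The features and labels below are invented toy data.

def nearest_neighbor(train, query):
    """Predict a label by copying the closest labeled example (1-NN)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda ex: dist(ex[0], query))
    return label

# Toy "training set": (features, label) pairs, e.g. [size, fluffiness]
examples = [
    ([9.0, 2.0], "dog"),
    ([8.5, 3.0], "dog"),
    ([3.0, 8.0], "cat"),
    ([2.5, 9.0], "cat"),
]

print(nearest_neighbor(examples, [8.0, 2.5]))  # large, sleek animal -> "dog"
print(nearest_neighbor(examples, [3.2, 8.5]))  # small, fluffy animal -> "cat"
```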

In general, for AI, greater input leads to greater knowledge. Someone spending 38 minutes a day on Facebook accumulates 13,870 minutes a year on the platform, give or take, equaling almost 10 entire days. That’s a lot of data the computer can use to learn how to generate strong recommendations.

Kartik Hosanagar speaking on stage
Kartik Hosanagar’s work focuses on the digital economy, with a particular interest in how algorithms influence consumer behavior. Several studies he’s conducted found that recommendations hugely influence the choices people make, by anywhere from 15 to 60% depending on the online service.

Research from Wharton marketing professor Kartik Hosanagar has looked at consumption on some of these platforms. “About 70% of the time people spend on YouTube is attributed to recommendations,” he says. “For Netflix, about 80% of the streaming video hours people view is now influenced by recommendations. It’s not 80% of the clicks but 80% of the video hours we view. So, for instance, we might discover a show because Netflix recommends it, then we watch the whole season. Episodes 2, 3, and 4 themselves aren’t influenced by the initial recommendation, but Episode 1 got the whole process started.”

Recommendations drive purchases

Another area where this comes into play often—daily for some people—is online shopping. “Reviews and recommendations are a huge driver of purchases,” Hosanagar explains. Algorithms largely determine what pops up when, typically through one of two filtering methods.

The first, called collaborative filtering, is the “people who bought X also bought Y” approach. “It’s more popular,” Hosanagar says, “partially because you don’t have to understand the products very deeply.” In other words, the prompting results from the historical data of previous purchases. The alternative, called content-based filtering, takes cues from the first product to recommend a second one—“if you like X, you might also like Y”—which requires a much more in-depth knowledge of both products.
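The “people who bought X also bought Y” idea can be made concrete with a co-occurrence count: tally how often two products appear in the same purchase history, then recommend the most frequent companions. This is a deliberately simplified sketch on invented baskets; production systems use far larger histories and more sophisticated scoring.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories (sets of products bought together).
baskets = [
    {"raincoat", "leash", "treats"},
    {"raincoat", "leash"},
    {"leash", "treats"},
    {"raincoat", "booties"},
]

# Count how often each ordered pair of products co-occurs in a basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(item, n=2):
    """Top-n products most often purchased alongside `item`."""
    scores = Counter({b: c for (a, b), c in co_counts.items() if a == item})
    return [product for product, _ in scores.most_common(n)]

print(also_bought("raincoat", n=1))  # "leash" co-occurs with it most often
```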

“As you go through life doing anything—making a credit card purchase, using a dating app, using Google Maps—all of these have huge amounts of AI helping you sort through the hopelessly large choices you have to make.” Computer scientist Lyle Ungar

Companies, which frequently employ a hybrid of the two methods, constantly tweak the algorithms they use for their recommendation engines because these suggestions matter to their bottom line. Hosanagar recounts the story a former Amazon chief scientist told his Enabling Technologies class in 2005. “When they were trying to roll out recommendations [in the late 1990s], there was a lot of internal debate,” Hosanagar says. “Some people felt this would be a great new technology to help surface relevant products. Others worried you might actually end up hurting sales by showing a consumer who had found a relevant product another option.”

Screen with upper right view of Scandinavia on a Google map
Applications like Google Maps use AI to filter data for users. “Why would I try to pick which route to drive?” Ungar says. “If I’m driving in a place like Paris, Google knows Paris better.”

Amazon ran an A/B test, sending some consumers the recommendations and maintaining the recommendation-free experience for others. Purchases in the group that saw the endorsements increased by nearly 25%. Twenty years later, reports suggest that such recommendations actually increase sales by closer to 35%, and work Hosanagar has done on other online services confirms that suggestions hugely influence choice, by 15 to 60% across different studies.

It’s all part of the data paring-down process that artificial intelligence helps to foster. “I love Amazon, and part of why I love it is because I can’t deal with it,” Ungar says. “Sure, I can read some of the reviews on a product, but I can’t read them all.” Amazon does the work for consumers, sorting chaos into order. “It’s modern AI,” Ungar adds. “It’s interactive, human-based. Humans generate the reviews—even, typically, the fake ones—and it’s me reading those reviews,” but the decision-making process gets influenced by some back-end algorithmic acrobatics.

Diagnostic copilot

In the world of health care, that can mean providing physicians a natural language processing tool that scans countless medical records seeking patterns in a patient population, or help in making a diagnosis. “When a doctor is trying to figure out what’s wrong with you, AI can look at the patient’s full history and offer [a range of differential diagnoses] to help the doctor consider alternatives,” says Ross Koppel, a sociologist in the School of Arts and Sciences who teaches about biomedical informatics. “That’s a really good use of AI.”

For the past three years, that’s been happening in the Radiology Department at the Hospital of the University of Pennsylvania. “We’re training the computer to do what humans do, but better and faster,” says Suyash Mohan, an associate professor of radiology at the Perelman School of Medicine. “When a new patient gets an MRI, the machine extracts all features on the images and feeds out a differential diagnosis.”

From there, it’s up to the practitioner to decide how to proceed, Mohan adds, noting, “The first shot is taken by a trainee, which is a computer, and then you, the physician, can say, ‘I agree or disagree.’ That’s where natural intelligence comes in.” He’s seen it succeed both experimentally and in practice.

In one study Mohan and colleagues conducted, more than 2,400 MRI scans were interpreted by two radiology residents, two neuroradiology fellows, and two neuroradiology attendings, and their performance was compared with that of an algorithm. The computer performed at a level between the residents and fellows—in other words, better than a relatively new trainee but not as well as a fully trained physician working in the field. Mohan predicts that within the next decade or so, AI will become more a part of daily radiology practice, helping to accurately diagnose conditions based on MRI images.

Aerial view of the Perelman Center for Advanced Medicine
For the past three years, for a select number of cases, Penn Medicine’s Radiology Department has employed artificial intelligence as a starting point to make diagnoses from MRI images. A physician then takes—or leaves—that information to draw a final conclusion, says radiologist Suyash Mohan. “That’s where natural intelligence comes in.”

In 2018, the researchers completed a clinical trial on real-time cases for 200 patients, and have recently begun a clinical trial on precision education in radiology, the idea of using artificial intelligence to better instruct radiology trainees.

There’s plenty of other potential for AI in health care. In India, for example, physicians are starting to use artificial intelligence to look for early signs of an eye condition called diabetic retinopathy, which can lead to blindness in people with diabetes. Beyond that, AI could improve mammogram effectiveness through computer-aided detection of potentially cancerous abnormalities, or offer virtual biopsies in patients with brain tumors.

“Although I’m skeptical about many of the hyper claims about health information technology, I’m hopeful about the utility of AI,” Koppel says. “In fact, I look forward to it. I do caution that it may do dumb things, but basically, I’m pretty optimistic that it will help. It’s going to help doctors think about additional issues.”

Free will and the future of AI

All but the most basic uses of artificial intelligence in health care are still a few iterations away. The same is true for large-scale AI in something like the job-hiring process, which hasn’t yet figured out how to eliminate underlying bias, or in smart homes, where a homeowner might ask their fridge to let them know when they need more milk and, in the process, inadvertently open up their hard drives and routers to cyberintruders.

A doctor points to a middle panel of a three-panel display of MRI brain scans
Though all but the most basic uses of AI in health care are still in the future, there are many possibilities, from detecting early signs of diabetic retinopathy to improving mammogram effectiveness and offering virtual biopsies for patients with brain tumors.

For applications with AI already in play, like cellphone autocompletion or virtual scheduling assistants, the technology continues to improve. “You can see them gradually getting more and more useful,” Ungar says. He suspects a subtle, almost imperceptible, shift toward AI pervasiveness during the next decade or two. 

“We think that we want to choose things—and we do of course—but all of this will become more humanistic. Am I going to be better at picking movies for myself or is Netflix? It knows way more people’s viewing histories than I do. Why should I think that I can watch some trailer written to sell me something and do a better job than Netflix, which has hundreds of thousands of data points?”

That doesn’t, however, mean the end of free will or human choice. “These algorithms are valuable because they can make decisions at a scale at which no human can,” Hosanagar says. He stresses that people should become more active users, understanding how the algorithms affect their choices and demanding at least a modicum of control. Otherwise, whether they want it to or not, that dog in a raincoat they clicked just once will continue to haunt them, at least until another item takes its place. 

Kartik Hosanagar is the John C. Hower Professor of Technology and Digital Business, a professor of marketing at the Wharton School, and author of “A Human’s Guide to Machine Intelligence: How Algorithms are Shaping our Lives and How We Can Stay in Control.”

Ross Koppel is an adjunct professor in the Department of Sociology in the School of Arts and Sciences. He is also a senior fellow at the Leonard Davis Institute of Health Economics.

Suyash Mohan is an associate professor in the Radiology and Neurosurgery departments at the Perelman School of Medicine.

Lyle Ungar is a professor in the departments of Bioengineering and Computer and Information Science in the School of Engineering and Applied Science, in the Graduate Group in Genomics and Computational Biology in the Perelman School of Medicine, in the Department of Operations, Information, and Decisions in the Wharton School, and in the Department of Psychology in the School of Arts and Sciences.