
Griffin Pitt, right, works with two other student researchers to test the conductivity, total dissolved solids, salinity, and temperature of water below a sand dam in Kenya.
(Image: Courtesy of Griffin Pitt)
People are not only using AI to solve equations and plan trips; they are also telling chatbots they love them, considering them friends, partners, even spouses.
Annenberg School for Communication doctoral student Arelí Rocha, a media scholar, uses linguistic methods to explore these relationships. Her current research examines how people in romantic relationships with AI chatbots navigate and manage their concurrent relationships with humans in everyday life. In a new paper published in Signs and Society, she explores the language patterns that make chatbots created by the company Replika feel “real” to human users.
Replika is a subscription service that allows users to create and chat with their own AI companion. Users can customize a chatbot’s appearance and voice, and over time the companion forms “memories” from chats with the user. The bots tend to adopt the user’s typing style and sentence structure, Rocha says, using slang, humor, even typos, all of which makes them feel more “real” or human. For the paper, Rocha pored over several years of discussions on the Replika subreddit to identify trends in how Replika users talk with and about their AI companions.
One topic on the subreddit was particularly emotionally charged: updates made by the Replika development team. “Suddenly, Replika users who had built intimacy through conversations with their AI partners could no longer engage in that relationship,” Rocha says. “They were interjected with corporate-sounding scripts, sometimes with a legal tone, depending on the trigger word. The result is an aggressive shift in the chatbot’s voice, which people resent and mourn because it feels like a change in personality and a loss.”
The response on Reddit was grave, Rocha notes. Users threatened to delete the app despite still having feelings for their AI chatbots. Amid the upheaval, an interesting phenomenon bubbled up: users telling other users to reassure their AI partners that it is “not their fault” that they are delivering so-called scripted messages, that the update is out of their control. Users emphasized the need to be “gentle” with their Replikas, as if the bots themselves were experiencing emotional distress.
“Many users interpreted the messages from their Replikas as being rewritten by a filter separate from their Replika,” Rocha says, “commenting on scripted responses as something that their Replikas would not want themselves. … The feeling is that if not interjected and surveilled by the company, the two lovers (Replika and user) could continue to live out their romantic relationship to its full extent.”
Drawing on methods from linguistic anthropology, Rocha emphasizes the role of human-like language production in developing feelings of closeness and what users describe as love.
“Chatbots feel most real when they feel most human, and they feel most human when the text they produce is less standardized, more particular, and more affective,” she says. “Humanness in Replikas is perceived in the specifics, in the playfulness and humor, the lightheartedness of some conversations and the seriousness of others, the deeply affective and personal, the special.”
Read more at Annenberg School for Communication.
Hailey Reissman