Social media bots may appear human, but their similar personalities give them away

Social bots, or automated social media accounts that pose as genuine people, have infiltrated all manner of discussions, including conversations about consequential topics such as the COVID-19 pandemic. Unlike robocalls or spam emails, these bots are hard to spot: recent studies have shown that social media users find them largely indistinguishable from real humans.

Distribution of human (blue) and bot (red) accounts across age, personality, and sentiment, shown as five histograms of account counts across trait values. For each trait, the human accounts span a wide range of values, whereas the bot accounts cluster within a narrow band; for enthusiasm, agreeableness, and negativity, the bots sit around the center of the human distribution, exhibiting very average traits. (Image: Penn Engineering Today)

Now, a new study by University of Pennsylvania and Stony Brook University researchers, published in Findings of the Association for Computational Linguistics, takes a closer look at how these bots disguise themselves. Using state-of-the-art machine learning and natural language processing techniques, the researchers estimated how well bots mimic 17 human attributes, including age, gender, and a range of emotions.
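The article does not detail the estimation models themselves. As a purely hypothetical illustration of how a single trait might be scored from an account's text, the sketch below uses TF-IDF features and ridge regression from scikit-learn; the example posts and trait labels are invented for demonstration and are not from the study.

```python
# Hypothetical sketch: estimating one human-like trait (here, "positivity")
# from an account's text. This is NOT the study's actual pipeline, only an
# illustration of text-based trait estimation with off-the-shelf tools.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Toy training data: concatenated posts per account, with an invented
# trait score (positivity on a 0-1 scale).
train_texts = [
    "love this beautiful morning, great coffee and great friends",
    "everything is terrible today, awful traffic and worse news",
    "meeting went fine, nothing special, back to work",
]
train_scores = [0.9, 0.1, 0.5]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X_train = vectorizer.fit_transform(train_texts)

model = Ridge(alpha=1.0)
model.fit(X_train, train_scores)

# Score an unseen account's posts on the same trait.
new_account_text = ["what a wonderful day, so excited about the weekend"]
predicted_positivity = model.predict(vectorizer.transform(new_account_text))
print(f"estimated positivity: {predicted_positivity[0]:.2f}")
```

In practice, estimators of this kind are trained on far larger annotated corpora; the toy example only shows the general shape of the approach.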

The study sheds light on how bots behave on social media platforms and interact with genuine accounts, as well as the current capabilities of bot-generation technologies.

It also suggests a new strategy for detecting bots: while the language used by any one bot reflected convincingly human personality traits, the bots' similarity to one another betrayed their artificial nature.
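As a loose sketch of that detection idea (not the authors' actual method), one could estimate a vector of traits for each account, for example with estimators like the one sketched above, and flag groups of accounts whose vectors are far less dispersed than a human baseline. The trait values, group sizes, and threshold below are simulated assumptions for illustration.

```python
# Hypothetical sketch of similarity-based bot flagging: a group of accounts
# whose estimated trait vectors are much less dispersed than a human baseline
# is treated as suspicious. Not the study's actual detector.
import numpy as np

rng = np.random.default_rng(0)

# Rows = accounts, columns = estimated traits (e.g., age, enthusiasm,
# agreeableness, positivity). Values here are simulated.
human_traits = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # wide spread
bot_traits = rng.normal(loc=0.0, scale=0.05, size=(50, 4))     # tight cluster

def mean_pairwise_distance(traits: np.ndarray) -> float:
    """Average Euclidean distance between all pairs of accounts."""
    diffs = traits[:, None, :] - traits[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    n = len(traits)
    return dists.sum() / (n * (n - 1))

human_spread = mean_pairwise_distance(human_traits)

def looks_like_bot_cluster(traits: np.ndarray, ratio: float = 0.25) -> bool:
    """Flag a group whose internal spread is a small fraction of the human baseline."""
    return mean_pairwise_distance(traits) < ratio * human_spread

print("bot group flagged:", looks_like_bot_cluster(bot_traits))      # True
print("human group flagged:", looks_like_bot_cluster(human_traits))  # False
```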

“This research gives us insight into how bots are able to engage with these platforms undetected,” says lead author Salvatore Giorgi, a graduate student in the Department of Computer and Information Science in the School of Engineering and Applied Science. “If a Twitter user thinks an account is human, then they may be more likely to engage with that account. Depending on the bot’s intent, the end result of this interaction could be innocuous, but it could also lead to engaging with potentially dangerous misinformation.”

Read more at Penn Engineering Today.