Social bots, or automated social media accounts that pose as genuine people, have infiltrated all manner of discussions, including conversations about consequential topics such as the COVID-19 pandemic. Unlike robocalls or spam emails, these bots are not easy to spot: recent studies have shown that social media users find them largely indistinguishable from real humans.
Now, a new study by University of Pennsylvania and Stony Brook University researchers, published in Findings of the Association for Computational Linguistics, takes a closer look at how these bots disguise themselves. Using state-of-the-art machine learning and natural language processing techniques, the researchers estimated how well bots mimic 17 human attributes, including age, gender, and a range of emotions.
The study sheds light on how bots behave on social media platforms and interact with genuine accounts, as well as the current capabilities of bot-generation technologies.
It also suggests a new strategy for detecting bots: while the language used by any single bot convincingly reflected human personality traits, the bots' similarity to one another betrayed their artificial nature.
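To make the intuition concrete, here is a minimal sketch of that population-level idea, not the authors' actual method: each account is summarized as a vector of language-derived attribute estimates, and an account is flagged when some other account's vector sits suspiciously close to it. The feature construction, the 17-dimensional toy data, and the 0.95 similarity threshold are all illustrative assumptions.

```python
# Sketch: flag accounts whose attribute vectors are near-duplicates of another
# account's. Individually each vector may look plausibly human; it is the
# unusual cross-account similarity that raises suspicion.
import numpy as np

def cosine_similarity_matrix(features: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between row vectors."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)
    return unit @ unit.T

def flag_suspicious(features: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """Flag accounts whose nearest neighbor is suspiciously similar.

    `features` is an (accounts x attributes) matrix, e.g. per-account
    estimates of age, gender, and emotion scores derived from language.
    The threshold is an assumption for demonstration, not from the study.
    """
    sim = cosine_similarity_matrix(features)
    np.fill_diagonal(sim, -np.inf)   # ignore self-similarity
    nearest = sim.max(axis=1)        # each account's closest neighbor
    return nearest >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic data: varied "human" profiles vs. near-clone "bot" profiles.
    humans = rng.normal(size=(50, 17))
    bots = rng.normal(size=(1, 17)) + 0.01 * rng.normal(size=(10, 17))
    accounts = np.vstack([humans, bots])
    flags = flag_suspicious(accounts)
    print(f"flagged {flags.sum()} of {len(accounts)} accounts")
```

On the synthetic data above, the near-clone bot accounts cluster tightly and get flagged, while the varied human profiles rarely cross the threshold, mirroring the study's observation that similarity across accounts, rather than any single account's language, is the giveaway.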
“This research gives us insight into how bots are able to engage with these platforms undetected,” says lead author Salvatore Giorgi, a graduate student in the Department of Computer and Information Science in the School of Engineering and Applied Science. “If a Twitter user thinks an account is human, then they may be more likely to engage with that account. Depending on the bot’s intent, the end result of this interaction could be innocuous, but it could also lead to engaging with potentially dangerous misinformation.”
Read more at Penn Engineering Today.