Content moderation is a delicate balancing act for social media platforms trying to grow their user base. Larger platforms such as Facebook and Twitter, which make most of their profits from advertising, can’t afford to lose eyeballs or engagement on their sites. Yet they are under tremendous public and political pressure to stop disinformation and remove harmful content. Meanwhile, smaller platforms that cater to particular ideologies would rather let free speech reign.
In a forthcoming paper, “Implications of Revenue Models and Technology for Content Moderation Strategies,” Wharton School marketing professors Pinar Yildirim and Z. John Zhang, and Wharton doctoral candidate Yi Liu show that a social media firm’s content moderation strategy is shaped mostly by its revenue model. The team accounts for the considerable heterogeneity among users and for the different revenue models platforms may adopt, and derives the content moderation strategy that maximizes each platform’s revenue.
When social media platforms moderate content, the most significant determinant of their strategy is the bottom line. That bottom line may rely heavily on advertising, that is, on delivering eyeballs to advertisers, or on the subscription fees that individual users pay. The two revenue models stand in stark contrast.
While advertising relies on delivering as many eyeballs as possible to advertisers, subscription revenue depends on attracting customers willing to pay. As a result, the content moderation policies designed to retain users also look different under the two models. Platforms running on advertising revenue are more likely to moderate content, but with laxer community standards, so as to retain a larger group of users than platforms funded by subscriptions.
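This trade-off can be sketched with a toy numerical example. The model below is purely illustrative and is not the authors' model: it assumes two hypothetical user groups (a large free-speech-leaning group with low willingness to pay, and a smaller sensitive group with high willingness to pay), each of which stays on the platform only within a certain band of moderation strictness. All numbers are invented for illustration.

```python
# Toy sketch (not the paper's model): why an ad-funded platform may choose
# laxer moderation than a subscription-funded one. Every number here is a
# hypothetical assumption made for illustration.

AD_RATE = 1.0  # assumed ad revenue per active user

# Hypothetical user groups: population share, the range of moderation
# strictness they will tolerate, and their willingness to pay (wtp).
GROUPS = {
    "free_speech": {"share": 0.7, "accepts": (0.0, 0.4), "wtp": 1.0},
    "sensitive":   {"share": 0.3, "accepts": (0.6, 1.0), "wtp": 3.0},
}

def active_share(strictness):
    """Fraction of users who stay on the platform at a given strictness."""
    return sum(g["share"] for g in GROUPS.values()
               if g["accepts"][0] <= strictness <= g["accepts"][1])

def ad_revenue(strictness):
    # Advertising pays per active user, so more eyeballs means more revenue.
    return AD_RATE * active_share(strictness)

def sub_revenue(strictness, price):
    # Subscriptions collect only from users who both stay on the platform
    # and are willing to pay the price.
    return price * sum(g["share"] for g in GROUPS.values()
                       if g["accepts"][0] <= strictness <= g["accepts"][1]
                       and g["wtp"] >= price)

grid = [i / 100 for i in range(101)]
best_ad = max(grid, key=ad_revenue)
best_sub = max(grid, key=lambda m: max(sub_revenue(m, p) for p in (1.0, 3.0)))

print(f"ad-funded optimum strictness:    {best_ad:.2f}")   # lax: keeps the larger group
print(f"subscription optimum strictness: {best_sub:.2f}")  # strict: serves high-WTP group
```

Under these assumed numbers, the ad-funded platform maximizes revenue with a lax standard that retains the large free-speech group, while the subscription platform earns more by setting a stricter standard and charging the smaller, sensitive group a higher price, which mirrors the contrast the paper describes.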
Overall, the paper’s findings cast doubt on whether social media platforms will always remedy these technological deficiencies on their own.
Read more at Knowledge@Wharton.