Project

The impact of content moderation on the ideological diversity of social media

Code
bof/baf/4y/2024/01/1123
Duration
01 January 2024 → 31 December 2025
Funding
Regional and community funding: Special Research Fund
Promotor
Research disciplines
  • Natural sciences
    • Data mining
    • Machine learning and decision making
    • Natural language processing
    • Artificial intelligence not elsewhere classified
  • Social sciences
    • Digital media
Keywords
Artificial intelligence, Public discourse, Content moderation, Social media
Project description

Social media platforms commonly use content moderation to maintain a safe and respectful environment, comply with legal regulations, and preserve their chosen brand identity. For example, it is commonly used to suppress content considered offensive to the target audience and to ensure compliance with regulations such as the European Digital Services Act.

When applied by the most popular social media platforms, such as Meta's Facebook and Instagram, TikTok, and X, content moderation has the potential to skew the public debate on timely topics, including national and international politics and other impactful societal debates. This raises concerns about the influence these large technology companies and their regulators have on public discourse and opinion. The capabilities of generative AI technologies are likely to further amplify these concerns.

In this project, we will study social media and generative AI systems to better understand the extent to which content moderation, including the moderation of AI-generated content, affects public discourse.