Vinci blogs

Do we want AI algorithms to shape public discourse?

Date: 6 June 2024
Platforms owned by major tech companies.

Some say it is vital for large numbers of citizens to participate in public discourse – that all voices need to be heard. But in practice, there are two key problems. First, most citizens don’t want to make the effort and would rather spend their time on work, home life, or entertainment. Second, those who do join the public debate increasingly do so via social media, where AI algorithms determine who sees what. Here, we discuss whether these problems are serious and, if so, whether we can offer potential solutions.

In today's digital era, the majority of Europe's public discourse takes place on platforms owned by a handful of major tech companies, such as Meta (the parent company of Facebook, WhatsApp, and Instagram), X Corp (formerly Twitter), and ByteDance (the Chinese firm behind TikTok). These platforms are lauded for making public conversations more inclusive and engaging, yet there is increasing concern over their role as de facto arbiters of these discussions. Indeed, citing concerns about personal data harvesting and national security, the US recently passed a law forcing TikTok to sell to a US owner or cease operating there. The platforms’ control is exerted through algorithms that curate content, selective bans on users, and dominance over the digital ecosystem, which limits the emergence of rival platforms. Furthermore, these corporations have instituted their own regulatory frameworks, potentially as a means to pre-empt government-led oversight. The broader effects of their hegemony, driven by commercial interests that directly and indirectly shape the public discourse, are not fully understood.

A key concern is the workings of the AI algorithms, which push particular messages, videos, and other content to individual users based on their behavior and individual preferences. The algorithms’ main task is to hold users’ attention so that the platforms can earn advertising revenue. As a result, individual users are constantly fed content that closely aligns with their own opinions, and are kept engaged with shocking or worrisome material. Consequently, the algorithms expose users to ever-more extreme content, acting as "polarization machines". Because so many voters now get their information this way, the algorithms may profoundly affect democratic processes and institutions. And unlike traditional media, which are comparatively easy to regulate because everyone sees the same content, social media presents a formidable challenge for policymakers. Without proper safeguards, social media algorithms may be normalizing extreme viewpoints among the public. The toy sketch below illustrates the underlying logic.
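To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch in Python. Nothing here reflects any platform's actual code; the Post fields, the predicted_engagement model, and its weights are all illustrative assumptions.

```python
# Illustrative sketch only: a toy engagement-maximizing feed ranker.
# All names (Post, predicted_engagement, rank_feed) and all numbers
# are hypothetical -- this is not any platform's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    topic: str       # e.g. "immigration", "sports"
    stance: float    # -1.0 (one pole of a debate) .. +1.0 (the other)
    arousal: float   # 0.0 (calm) .. 1.0 (shocking/worrisome)

def predicted_engagement(post: Post, user_stance: float) -> float:
    """Toy model: engagement rises when content matches the user's
    existing opinion and when it is emotionally arousing."""
    agreement = 1.0 - abs(post.stance - user_stance) / 2.0  # in 0..1
    return 0.6 * agreement + 0.4 * post.arousal             # assumed weights

def rank_feed(candidates: list[Post], user_stance: float) -> list[Post]:
    # Optimizing only for engagement: like-minded, high-arousal
    # content floats to the top -- the "polarization machine" effect.
    return sorted(candidates,
                  key=lambda p: predicted_engagement(p, user_stance),
                  reverse=True)
```

Ranked this way, a user with a strong stance sees agreeing, high-arousal posts first; nothing in the objective ever pushes content back toward the middle ground.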

Such polarization makes different groups increasingly unwilling, or unable, to bridge the gap with other societal groups, especially when discussing contentious topics with those holding divergent views. What is the role of social media and AI algorithms in this worrying trend? It would certainly be valuable if the platforms were to “feed the middle ground” and show how different groups’ standpoints and underlying values are often closer to each other than they may appear. What AI algorithms have so far failed to deliver is a way to spark conversations among users from varied social and ideological backgrounds – a ‘political-bridge Tinder’, as it were – underscoring the need to find commonality in times of increased divisiveness. In other words, we would each need a personal Jordan Klepper, the comedian known for his documentary-style field interviews, to delve into why people feel strongly about certain issues and to find points in common across polarized views. Just as Klepper subtly uncovers common ground between political camps, an ideal algorithm would reset itself from time to time and offer content that bridges the divide rather than isolating groups from one another, giving users the tools to escape their filter bubbles. A sketch of what such a bridging feed might look like follows below.
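Continuing the toy example above (and reusing its hypothetical Post and rank_feed), the sketch below shows one way such a “bridging” correction could work: reserving a regular slot in the feed for calm content from across the divide. This is our own illustrative assumption, not a description of any existing system.

```python
# Hypothetical extension of the earlier sketch: roughly every k-th
# feed slot is reserved for calm content from the opposite side.
def rank_bridging_feed(candidates: list[Post], user_stance: float,
                       k: int = 4) -> list[Post]:
    # Bridging candidates: opposing stance but low arousal, so they
    # can inform rather than provoke.
    bridges = sorted(
        (p for p in candidates
         if p.stance * user_stance < 0 and p.arousal < 0.3),
        key=lambda p: abs(p.stance),           # mildest disagreement first
    )
    rest = [p for p in candidates if p not in bridges]
    feed: list[Post] = []
    for post in rank_feed(rest, user_stance):  # still engagement-ranked...
        feed.append(post)
        if len(feed) % k == 0 and bridges:     # ...but periodically "reset"
            feed.append(bridges.pop(0))        # with a bridging slot
    return feed
```

In practice, any such rule trades engagement (and thus advertising revenue) for diversity, which is precisely why platforms have little commercial incentive to adopt one on their own.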

Solving the polarization of opinions is not easy, since it is bound up with the emotional engagement of social media users. Yet misinformation may not be the biggest problem in today's society: previous studies show that the main problem is people's tendency to remain uninformed rather than to be misinformed. While misinformation affects people who are already inclined to believe certain narratives, by leveraging their rigid beliefs, the decision to remain uninformed stems from a lack of trust in legitimate sources, whose claims people see as ungrounded. This issue can be partly resolved by increasing digital literacy and the public's ability to evaluate information sources critically. Institutions, in turn, have the power to regulate the digital public sphere, balancing the tension between censorship and the right to free speech, alongside the potential role of education and technological solutions in building digital trust. The European Commission is implementing regulation and legislation to align social media platforms with societal values of openness, transparency, authenticity, trust, and self-determination for citizens regarding their digital data. However, big tech firms are lobbying hard to maintain control over their own algorithms and business models.

This lobbying by big tech firms is indeed a major issue. Given that their main objective is profit, an important step for today's society would be to transform big tech social media apps into socially beneficial platforms: ones that pursue the social objective of enhancing the public good while remaining financially sustainable. This is what led OpenAI, the firm behind the widely used AI chat application ChatGPT, to be founded as a non-profit organization in the first place. However, when the oversight board attempted to enforce social safeguards and limit OpenAI's ability to release potentially dangerous AI functionality, board members were replaced and commercial interests prevailed. To ensure that social media platforms are used responsibly and ethically, there have been calls for stronger rules and regulations, as well as a push for public organizations to promote ethical behavior online. After all, social media affects not just our opinions but also our mental health. Recently, the leaders of the biggest social media platforms were forced to apologize to parents whose children had taken their own lives after exposure to harmful content on those platforms. Unfortunately, meaningful changes are still lacking, and adequate socially driven regulations have yet to be implemented.

Authors: Cleo Silvestri - c.silvestri@rug.nl
David Langley - d.j.langley@rug.nl