E.V. (Emillie Victoria) de Keulenaar, MA

Norm and technique: speech moderation across contested public spheres
Speech norms join a long history of legal, social and political measures to prevent the normalisation of problematic histories by moderating public language. Typically, such norms are implemented in the form of social codes, laws and social movements that seek to change, sanction or trivialise expressions seen as facilitating the circulation, and thereby the normalisation, of problematic ideas. These norms have long been applied to a range of media, including newspapers, television, radio, the Web and social media platforms: traditionally in the form of blasphemy, hate speech and discrimination laws, audio and visual obfuscation techniques (bleeps, black rectangles) and educative measures to contradict and moderate extreme ideologies, and more recently as online content moderation techniques such as deplatforming, demotion and flagging.
To date, the ways in which such measures are operationalised online have primarily been studied as legal and institutional matters, particularly as challenges to maintaining (or restricting) free speech, or as questions about the implications of content moderation for platforms as political institutions. This project instead seeks to understand the role of content moderation in the formation of platforms as public and counter-spheres. This means examining how contingent and essentially agonistic speech norms – legal, public and other definitions of what counts as objectionable language – cause the boundaries of these spheres to shift and rupture, redirecting flows of more-to-less objectionable language towards a diverse ecology of moderated and less moderated counter-spheres. These processes bear major implications for the overall health of public debate, which this project argues depends less on the extent to which its extremes are suppressed than on the degree to which its members are able to (critically) consent to what counts as objectionable. Arguably, this is best achieved in a well-moderated public sphere with a flexible margin of tolerance for critical debate around what should and should not count as unacceptable speech.
This project is broken down into three main research questions:
- RQ1: Which content moderation policies against misinformation and hate speech have Twitter and YouTube developed between 2010 and 2021, and how have they been operationalised as content moderation techniques like deplatforming, demotion and flagging?
- RQ2: To what extent do more or less moderated platforms (Twitter, Reddit, YouTube versus BitChute, Parler and Gab) counter or complement each other's speech restrictions, and how do their differing speech affordances affect the circulation of prohibited misinformation and hate speech?
- RQ3: Which other forms of speech moderation can Twitter's and YouTube's content moderation infrastructures contemplate as alternatives to outright prohibition, such as consensus-building and context-based dialogue affordances?
Answering these questions calls for a combination of online public sphere theory (Weibel and Latour, 2005; Marres, 2016), digital methods (Rogers, 2013) and content moderation studies (Gillespie, 2018; Roberts, 2019) to explore how ideas about what can and cannot be said with regard to hate speech and misinformation have evolved within Twitter and YouTube issue publics and content moderation policies. In attending to these points, this project develops a preliminary thesis that, rather than attesting to a "deplatformisation" of the Web, we find interdependent relations between more and less moderated spaces as co-dependent "milieux" (Deleuze and Guattari, 1987, p. 313) shaped by the circulation of and engagement with moderated speech.
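As a minimal sketch of how the digital-methods component might assemble a diachronic corpus of platform policies for RQ1, the Python snippet below lists archived snapshots of a policy page via the Internet Archive's Wayback Machine CDX API. The example page URL (twitter.com/rules), the 2010–2021 range and the one-snapshot-per-month sampling are illustrative assumptions, not a description of the project's actual data collection.

```python
import requests

# Public endpoint of the Wayback Machine's CDX index of archived captures.
CDX_ENDPOINT = "http://web.archive.org/cdx/search/cdx"

def list_policy_snapshots(page_url, year_from=2010, year_to=2021):
    """Return (timestamp, archived_url) pairs for snapshots of page_url."""
    params = {
        "url": page_url,
        "from": str(year_from),
        "to": str(year_to),
        "output": "json",               # first row of the response holds field names
        "filter": "statuscode:200",     # keep only successfully archived captures
        "collapse": "timestamp:6",      # at most one capture per month (yyyyMM)
    }
    rows = requests.get(CDX_ENDPOINT, params=params, timeout=30).json()
    if not rows:
        return []
    header, entries = rows[0], rows[1:]
    ts, orig = header.index("timestamp"), header.index("original")
    return [
        (row[ts], f"https://web.archive.org/web/{row[ts]}/{row[orig]}")
        for row in entries
    ]

if __name__ == "__main__":
    # Illustrative target: Twitter's rules page; YouTube guideline pages could be queried the same way.
    for timestamp, snapshot_url in list_policy_snapshots("twitter.com/rules"):
        print(timestamp, snapshot_url)
```

The retrieved snapshots could then be diffed against one another to date changes in policy wording, giving a rough timeline of when prohibitions on misinformation and hate speech were introduced, rephrased or withdrawn.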