Theorizing Platform Content Moderation: Power, Resistance, and Democratic Control
João C. Magalhães (University of Groningen); Naomi Appelman (University of Amsterdam)
Platform content moderation – how private digital intermediaries define and control what is objectionable and desirable – has emerged as a central contemporary form of mass speech governance, one capable of influencing billions of people globally. Much of the growing scholarship focuses on describing its functioning, complexities, and technologies, and on trying to reform platforms by holding them to constitutional values. Yet, despite its obviously political nature, content moderation remains under-conceptualized as a (global) political practice.
This is puzzling, as moderation rearticulates key concepts of political theory. Recent years have made clear that platforms, regardless of their unilateral ability to moderate, often seek to appease certain actors in the design and enforcement of their moderation rules and technologies. These processes are hardly linear, though: not all voices, from all countries, at all times, are heard. These dynamics have been shown to reinforce systems of social and global oppression, such as racism, sexism, and neo-colonialism, in ways intimately connected to these companies’ global political-economic interests. This evidences the need to understand how moderation relates to representation, recognition, and plurality, which are closely tied to matters of justice, equality, and dignity. Similarly, it calls for understanding resistance to these systems, as well as their patterns of inclusion and exclusion.
Two factors make these aspects difficult to understand or address through the usual normative frameworks, such as legal rights. First, platforms are a peculiar kind of organization: globally operating corporations with outsized influence over the universal moral needs of socialization and individual expression. In other words, while their immense power is not anchored in the usual processes of political legitimation (e.g., elections) or even in a polity, and often remains legally protected by so-called ‘safe harbour’ laws, these companies still owe us something – but what, exactly, and how do we define this ‘us’? Second, much of content moderation today is automated through machine learning systems. The meaning of “objectionable” or “desirable”, or how to punish those who violate these definitions, may thus emerge not from direct human reasoning but from probabilistic calculations driven by complexly constructed datasets. Whose voices are represented, and whose silenced, when thousands of data annotators, moderators, officers, and technologists play some role in constructing the algorithms that spot, say, hate speech? How can we account for the cascading layers of rules, institutions, and actors?
This workshop aims to address the urgent task of theorizing platform content moderation. We especially welcome scholars working from the perspective of radical democratic theory, democratic resistance, decolonial theory, and political economy to consider three broad questions:
- How should we conceptualize content moderation as a form of power, and in which ways does it differ from previous forms of speech control?
- What does proper resistance to moderation mean, and how can it tackle the multiple dynamics of inclusion and exclusion? And
- To what extent and how should democratic control over content moderation be organised?
How to apply:
- A 300-word abstract;
- Email to: email@example.com
- Deadline: 11th of June (any time zone).
- No full papers required.