Partner Publication Alert 📢: Freedom of Expression in the Digital Public Sphere

May 4, 2021 | Partner updates

DiCED research affiliate Philipp Darius has recently co-authored a policy brief about freedom of expression online: “Freedom of expression in the digital public sphere: Strategies for bridging information and accountability gaps in algorithmic content moderation”

This brief informs legislators and policymakers of the risks posed by the proliferation of algorithmic content moderation and of the need for states to take a more proactive regulatory approach to the content moderation systems deployed by online platforms that provide digital infrastructure for public discourse. It highlights two gaps: an information gap that prevents regulators from evaluating the impact of content moderation on freedom of expression, and an accountability gap that arises from the absence of effective redress mechanisms through which users can challenge violations of freedom of expression.

This policy brief is one of three outputs of the research sprint on AI and content moderation, hosted virtually by the Alexander von Humboldt Institut für Internet und Gesellschaft (HIIG) from August to October 2020.

Please find an executive summary of this policy brief below.


EXECUTIVE SUMMARY

A substantial portion of contemporary public discourse and social interaction takes place on online social media platforms such as Facebook, YouTube, Reddit, and TikTok. These platforms therefore form a core component of the digital public sphere: although privately owned, they constitute a digital infrastructural resource open to members of the public.

As private entities, platforms can set their own rules for participation in the form of terms of service, community standards, and other guidelines. The content moderation systems that platforms deploy to ensure that posted content complies with these terms, conditions, and standards can influence and shape public discourse by mediating what members of the public are able to see, hear, and say online. Over time, these rules may have a norm-setting effect, shaping users' conduct and their expectations of what acceptable discourse looks like. The design and implementation of content moderation systems thus have a powerful impact on users' freedom of expression and their access to dialogic interaction on the platform.

With great power comes great responsibility: the growing adoption of algorithmic content moderation systems with a questionable track record of safeguarding freedom of expression raises urgent concerns about ensuring that content moderation is regulated in a manner that safeguards and fosters robust public discourse in the online sphere.
