Governing Artificial Intelligence


Markus Furendal (Postdoc, The Global Governance of AI, Department of Political Science, Stockholm University)

*All times are British Summer Time (BST)

September 9, 2021

Session I: Technical and theoretical aspects of AI 1

8.00 – 9.30

“Applying Measures of Artificial Intelligence to the Governance of Artificial Intelligence” – David Gamez (Middlesex University)

“AI as Black-Box: Phenomenological Reflections on the Crisis Posed by Opaque Machine Learning Methods” – Daire Boyle (Maynooth University)


Session II: Technical and theoretical aspects of AI 2

9.45 – 11.05

“Artificial Intelligence and Cultural Democracy” – Jonathan Gingerich (King’s College London)

“Against AI Value Alignment” – Chelsea Guo (Oxford University)


Session III: Governance of AI 1

12.00 – 1.20 P.M.

“The Dominant Power of AI Designers” – Jonne Maas (Delft University of Technology)

“AI Ethics, Ethics Washing, and the Need to Politicize Data Ethics” – Gijs van Maanen (Tilburg University)


Session IV: Governance of AI 2

1.35 P.M. – 2.15 P.M.

“An Institutionalist Approach to AI Ethics: Justifying the Priority of Government Regulation over Self-regulation” – Thomas Ferretti (LSE)


September 10, 2021

Session V: Governance of AI 3

8.00 – 9.20

“The Basic Structure and AI Governance” – Theodore Lechterman (Institute for Ethics in AI, Oxford University)

“The Global Governance of Artificial Intelligence: Some Normative Concerns” – Markus Furendal (Stockholm University)


Session VI: Governance through AI 1

9.35 – 10.55

“AI and the Question of Political Legitimacy” – Maria Nordström (The Royal Institute of Technology, Stockholm)

“Machine Learning and Democratic Legitimacy” – Karim Jebari, Jonas Hultin Rosenberg, Ludvig Beckman (Institute for Futures Studies, Uppsala University, Stockholm University)


Session VII: Governance through AI 2

12.00 – 1.20 P.M.

“Not So Silent Power? Artificial Intelligence, Algorithmic Governance and Three Concepts of Liberty” – Filip Biały (European New School of Digital Studies / Collegium Polonicum, Adam Mickiewicz University)

“AI, Opacity and Personal Autonomy” – Bram Vaassen (University of Cologne)


Session VIII: Governance through AI 3

1.35 P.M. – 2.30 P.M.

“Automating Anticorruption? The Challenges of Integrating Algorithms and Office Accountability” – Emanuela Ceva & María Carolina Jiménez García (University of Geneva)


Political philosophers have recently started to address the moral and social implications of increasingly complex Artificial Intelligence (AI), including the way that the biases of AI systems might entrench existing injustices, and whether decisions made by opaque AI systems can be legitimate. Philosophers and engineers also debate how best to rein in a hypothetical future “superintelligent” AI system, and how to make sure that its values align with ours. Yet countless other kinds of AI technology have already begun to be implemented throughout society, and even though many of them have potentially disruptive effects, there is still no systematic work addressing how the development and deployment of AI, more generally, ought to happen.

The advent of AI technology is not merely a technical issue, and moral questions arise not only with regard to its applications. How AI is developed and deployed is also a political question, calling for collective decisions about what society ought to be like. For instance, the current AI boom has in part been enabled by publicly funded research and the collection of data about people’s daily behavior, but most AI technology is developed and owned by relatively few large corporations. Since AI technology promises to be tremendously profitable for those who own it, this raises the question of whether this state of affairs is just, or whether the value and productivity growth created by the technology ought to be shared more widely. Similarly, AI technology might provide us with greater convenience and increased human capacities, at the cost of values like privacy or community. Current decisions about what kinds of AI technology to develop are mostly made by private companies, based on a market logic, but there may be fairer or more democratic alternatives. Who ought to have a say in this, and what kinds of institutions would have to be developed to ensure that it happens? Serious discussion of questions like these arguably needs to be informed by theories of justice, democracy, freedom, and power.

The purpose of the panel is to gather academics from different career stages to address this relatively neglected part of the rapidly evolving literature on the social and ethical impact of AI. Papers concerning moral issues involving the application of AI technology will be considered, but the aim is to interrogate issues at a higher level of abstraction, concerning what the social and ethical impact of AI technology will be, and how it ought to be shaped politically. This includes, but is not limited to, addressing questions like what the societal implications of the quickly developing AI technology are and what they ought to be, how to conceptualize ownership of AI technology and who gets to profit from it, how legitimate decisions about the development of AI ought to be made, and how to design (global) institutions to govern AI technology.

Submission guidelines:
Please send an abstract of no more than 500 words to by May 10. The abstract should be prepared for blind review. Please include a separate document with the title of your paper, your name, e-mail address, and institutional affiliation (if applicable). Acceptance/rejection decisions will be communicated within two weeks. Speakers will be asked to submit complete papers before the workshop, which will be circulated among the panel participants.

Information about Mancept Workshops
Due to uncertainty about the pandemic, this year’s Mancept workshops will be fully online. Please note that all participants at this panel need to register for the Mancept workshops. Registration opens in May.

This year’s fees are:
Full price (employed academics, e.g. lecturer, professor, etc.): £45
Discounted price (PhD/Master’s student, unaffiliated academic, etc.): £20
Non-presenting attendee: £15
For more information about the Mancept workshops, see and please direct general queries to
Questions regarding the specific panel and the submission should be directed to

Markus Furendal
Postdoc, The Global Governance of AI, Department of Political Science, Stockholm University