Governing Artificial Intelligence
Markus Furendal (Department of Political Science, Stockholm University)
Humanities Bridgeford Street Building: Room 1.70
As the development of increasingly capable Artificial Intelligence (AI) continues at a striking speed, questions about the social and ethical impact of AI technology – and how it ought to be shaped politically – are more pressing than ever. For instance, as government agencies and public institutions turn to automated or AI-assisted bureaucratic decision-making, AI begins to wield public authority. As complex generative AI systems create text and images in an instant, writers and artists raise concerns about who can legitimately reap the rewards of applications trained on data produced and processed by humans. Similarly, as AI is used to produce and spread misinformation, falsehoods, and micro-targeted ads, it is urgent to consider what kinds of (global) institutions can mitigate the expected impact on the political process and on generalized trust. After a period of fast AI development with little regulation, hard law is now beginning to be rolled out, with the groundbreaking EU AI Act expected to be passed in 2023. Still, most decisions about AI technology are made by private companies competing fiercely in the market. Political philosophy may help in our search for better, fairer, or more democratic alternatives.
The ‘Governing Artificial Intelligence’ panel at Mancept 2021 gathered scholars interested in what was then a relatively neglected aspect of the rapidly evolving AI ethics literature. The political-philosophical debate on these issues has matured significantly in the last two years, and subfields have emerged, specializing in issues such as algorithmic bias, explainability, automated influence, and the challenge of aligning AI with human values. This panel is hence a follow-up event, organized by the same convenor, aimed at helping this emerging research field take shape. What distinguishes it from other events concerned with AI ethics in general is that the purpose is not primarily to interrogate ethical concerns around particular AI applications. Rather, the goal is to raise more fundamental questions about how AI development and deployment ought to be shaped in the first place. In order to reach this higher level of abstraction, the panel primarily welcomes contributions that address the societal implications of AI technology, and how the governance of AI could be used to help steer us politically towards or away from particular outcomes.
Just as with the earlier panel, the purpose of the workshop is to gather academics from different career stages and to help form a network of people interested in these issues, broadly conceived. In light of this, the panel will primarily be a physical meeting in Manchester, but speakers who are unable to travel will have the opportunity to join remotely.
Each paper is allotted one hour, except in the first session on September 12, which starts early and devotes 50 minutes to each paper. Each slot begins with brief remarks from the author (up to 10 minutes), followed by prepared comments from another participant (15-20 minutes) and an open discussion (30-45 minutes). Markus Furendal will chair all sessions, unless other arrangements are worked out during the panel.
Please note that the panel will end at lunch on Wednesday, September 13.
Monday, September 11

11:00-12:30 | Registration
12:30-13:30 | Lunch
13:30-14:00 | Welcome Speech
14:00-16:00 | Session 1: Theoretical and empirical aspects of AI and its governance
  Paper: Stella Fillmore-Patrick (remote) – What Are Black Box Models?
  Comment: Hugo Cossette-Lefebvre
  Paper: Maria Hedlund – Responsibility for AI development
  Comment: Lukas Albrecht
16:00-16:30 | Tea and Coffee Break
16:30-17:30 | Session 2: Applied AI governance
  Paper: Dane Leigh Gogoshin (remote) – What We Should Fear about AIs and What to Do about It
  Comment: Maria Hedlund
17:45-19:00 | Wine Reception
19:30 | Conference Dinner
Tuesday, September 12

9:00-11:30 (note the earlier start) | Session 2 (continued)
  Paper: Thomas Ferretti – Protecting Employee’s Privacy in the Age of Workplace Analytics: A Collective Choice Argument
  Comment: Anantharaman Muralidharan
  Paper: Markus Furendal – The Democratic Credentials of Non-State Actors in the Global Governance of Artificial Intelligence
  Comment: Thomas Ferretti
  Paper: Anantharaman Muralidharan – AI and the Need for Justification (to the patient)
  Comment: Maria Hedlund
11:30-12:00 | Tea and Coffee Break
12:00-13:00 | Session 1 (continued)
  Paper: Hugo Cossette-Lefebvre – Neither Direct, Nor Indirect: Understanding Proxy-Based Algorithmic Discrimination
  Comment: Ting-An Lin
13:00-14:00 | Lunch
14:00-16:00 | Session 3: AI governance and democracy
  Paper: Enrique Alvarez Villanueva (remote) – Regulating the apocalypse
  Comment: Stella Fillmore-Patrick
  Paper: A.G. Holdier (remote) – Cruel Optimism and AI Governance: Democratization as a Magic Concept
  Comment: Dane Leigh Gogoshin
16:00-16:30 | Tea and Coffee Break
16:30-17:30 | Session 3 (continued)
  Paper: Ting-An Lin (remote) – Rethinking “Democratizing AI”: Algorithmic Justice and The Path of Democracy
  Comment: A. G. Holdier
Wednesday, September 13

9:30-11:30 | Session 4: Theoretical frameworks for AI governance
  Paper: Aline Shakti (remote) – The strengths and weaknesses of the governance term for the regulation of AI
  Comment: Markus Furendal
  Paper: Anastasia Siapka – The Capability Approach to AI-driven Automation
  Comment: Aline Shakti
11:30-12:00 | Tea and Coffee Break
12:00-13:00 | Session 4 (continued)
  Paper: Lukas Albrecht & Hagen Braun – The Necessity and Possibility of Trustworthy AI
  Comment: Anastasia Siapka
13:00-14:00 | End of panel & Lunch