
Artificial intelligence is an umbrella term for a set of technologies that can solve complex problems independently. The most recent wave of AI systems builds on advances in machine learning from the early 2010s that allow AI systems to perform better at an ever-increasing number of tasks. This has prompted continuing discussion of the societal impacts of AI, leading to more than 170 ethical frameworks[1] and over 700 policy initiatives[2] on AI. These frameworks and initiatives show that AI not only poses risks and opportunities for individuals, but also has systemic and general consequences for democracy. AI impacts democracy on a variety of levels and in different ways.

AI’s Challenge to Democracy

AI’s most direct and visible influence is on public discourse, especially in the context of elections. Social media platforms use content moderation and recommendation algorithms that are regularly optimized to increase user engagement. These algorithms have been used to distribute fake news and misinformation in order to influence users. AI-powered bots can increase the reach of misinformation by spreading content.[3] AI-based microtargeting allows political advertisers to influence voter behaviour at a more granular level:[4] users can be approached based on their individual characteristics, and voters can thereby be manipulated more effectively. Regarding content creation, several technologies have facilitated the production of manipulative information: text generation models can produce manipulative written content,[5] while deepfake technologies can be employed to alter pictures or to create videos in which public figures say things they have never actually said.[6] Taken together, AI enables a much more effective creation and distribution of misinformation, which has led to a great number of cases of digital election interference.[7]

There are also more general ways in which AI affects democracy, for example by rebalancing power relations. AI systems improve technologies, enable new uses, and continue to surpass human capabilities in many areas, ranging from complex games such as Go and poker to tasks like foreign language translation. These advances in AI have fueled autonomous driving, which has made decisive steps in the last decade. The same holds for robotics, whether in the military or in health care. Smart video surveillance has become very good at identifying humans and recognizing events like assaults, car accidents, or demonstrations. AI thus has the potential to be an important factor for gaining and maintaining power in society. Russian president Vladimir Putin famously told Russian schoolchildren on Knowledge Day that “whoever becomes the leader in this sphere will rule the world and it would not be desirable that this monopoly be concentrated in someone’s hands”.[8]

This rebalancing of power relations requires democratic legitimation. One aspect is that the private sector innovates without the same level of democratic accountability. In many areas, the question arises under what circumstances the private sector is competent to decide important ethical trade-offs. Can a private company be trusted to decide whether to put data protection concerns over fairness or vice versa?

There is yet another important democratic question that relates to so-called general-purpose technologies, that is, pervasive technologies that allow for the improvement of other technologies and for innovation in many sectors of society. The democratic issue here arises from the fact that innovation is possible in many areas of society and in relation to many uses of AI. This raises the question of how the means of innovation, such as funding and human resources, should be allocated to different purposes like economic competitiveness, sustainability, or the inclusion of persons with disabilities. AI could potentially serve all these purposes, but the resources to do so are limited. This requires choices that again call for democratic justification.

The (Lacking) Democratic Impetus of the EU Commission’s Proposal 

Policy discussions ensued in many countries. One of the processes receiving the most attention was the regulation of AI in the European Union. Member states issued a common declaration in early 2019, and the Commission followed suit with an AI strategy that included regulation as one measure. The new president of the Commission, Ursula von der Leyen, made the regulation of AI one of her top priorities, announcing the following in her political guidelines as candidate for the Commission presidency:

“In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence. This should also look at how we can use big data for innovations that create wealth for our societies and our businesses.”[9]

The Commission followed up with a whitepaper[10] and an ensuing stakeholder consultation. On 21 April 2021, the European Commission published its proposal for an Artificial Intelligence Act (PAIA), which set forth a general regulation of AI through a risk-based approach. The proposed regulation outlaws certain uses of AI completely, while requiring more detailed regulation for high-risk applications and certain transparency requirements for limited-risk applications. The proposal also includes rules on enforcement, including an obligation for member states to designate a national supervisory authority and the establishment of a European Artificial Intelligence Board.

It remains to be seen whether such a law can indeed deliver on the promise of realising AI’s potential while mitigating societal harms. What is significant is that the PAIA offers only limited measures relating to the protection of democratic processes and to democratic inputs. As already mentioned, most parts of the regulation only apply to high-risk systems as defined in Art. 6 PAIA. While there is a reference to a list of AI systems for the “administration of justice and democratic processes” in Annex III, this list so far only contains AI systems used in the judicial context. This shows that the legislator intends to keep an eye on AI in the context of democratic processes, but has not yet identified important systems in this area. What is even more significant is the particular role users play in the PAIA.

The PAIA also extends obligations to users of high-risk systems according to Art. 29 PAIA: they have direct obligations for the responsible use of high-risk AI systems. However, their opportunities to influence the governance of AI are limited and scattered throughout the act. Art. 3 No. 25 defines post-market monitoring as the collection and review of experiences gained from the use of AI systems, which includes feedback from users. Similar factors are important for determining the potential harm of a high-risk system according to Art. 7 Sec. 2 (c), which refers to reports or documented allegations. According to Art. 9 Sec. 4 PAIA, residual risks have to be communicated to users. The transparency obligation in Art. 13 PAIA defines its purpose as being “to enable users to interpret the system’s output and use it appropriately”. Likewise, Art. 14 emphasizes that humans must be capable of exercising meaningful oversight over AI systems. Art. 69 provides for further codes of conduct “intended to foster the voluntary application to AI systems other than high-risk AI systems”. Section 3 states that “[c]odes of conduct may be drawn up by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders and their representative organisations…”. This overview shows that the scope of the PAIA concerning participatory elements is very limited.

Policy Options to Further Democracy in Regulation

The European Commission has for many years lauded the importance of responsible research and innovation, but it has included only a few participatory elements in the regulation. Therefore, I would like to highlight seven ways to further democratic and participatory practices through the PAIA.

Meaningful User Participation

One of the most important lessons to be learned from practices of responsible research and innovation is to allow users to participate meaningfully in technology design. Especially in the case of emerging technologies, the question of what constitutes a risk often depends on the constituency that defines those risks. The same is true for decisions on how to design an AI system. Working with diverse groups can broaden the understanding of what is at stake, especially when dealing with high-risk applications. Including such participation clauses in provisions like Art. 9 and making them fully-fledged obligations would strengthen democracy and provide direct feedback in the case of tricky development decisions.

Rethink Standardisation

One major element of co-regulation is to provide a regulatory framework that has to be brought to life by standards. According to Art. 40 PAIA, high-risk AI systems that are in conformity with harmonised standards shall be presumed to fulfil the requirements of the PAIA. This confers significant authority on these standards. While standards in the area of AI specify legal and ethical requirements, they are mostly the product of expert meetings with very few possibilities for the public to weigh in. It is, however, legislators who are qualified to determine what counts as a valid standard. The PAIA might be a good opportunity to strengthen participation in the context of standardisation.

Meaningful Inclusion of Democratic Checks and Balances

The topic of standardisation also highlights the question of how to involve representative democratic institutions. Especially with regard to rapidly developing technologies, there might be opportunities to bring in the European Parliament or the Council of the EU on occasions when the PAIA is developed further.

Metagovernance

Another way to enhance pro-democratic uses of AI is to look for areas in which European or member state institutions can provide meaningful examples or push standards of democratic AI. This form of influence is conceptualized as metagovernance,[11] an important way of shaping governance by setting good examples. One idea in this regard would be to further best practices of content moderation on public broadcasting platforms or to look for other areas in which AI is used by public entities.

Sandboxing for Democracy

Art. 53 ff. PAIA provide for the governance of regulatory sandboxes in which regulatory requirements are lifted. One important aspect is that sandboxes are restricted to systems serving specific purposes such as security, public health, or environmental protection (Art. 54 Sec. 1 (a)). This illustrates how privileges can be granted to certain topics to encourage innovation. It also raises the question of whether legislators should include practices that enhance democracy on this list.

Incentives for Democratic Determination

There are also other ways to spur innovation: public funding can be a huge incentive. Technology clauses like Art. 4 Sec. 1 (g) of the UN Convention on the Rights of Persons with Disabilities show that there can be soft obligations for states to promote the research, development, and adoption of technologies in certain regards. Such a clause might also motivate democratic innovation in the case of AI.

Designability Principle: Focusing on Choices

Regulation only makes sense when it can be implemented on the ground. This prompts the question of how democracy as a legal principle can be turned into concepts that are also meaningful to developers of technologies.[12] One idea is to focus on a principle of designability that translates democratic ideas into specific requirements that developers and engineers can realise. It ought to have at least two tiers that need to be addressed by developers: the first tier is the changeability of the system; the second is its intelligibility. Democracy is based on the idea that it is adaptable and open to change: changes in government, changes in opinion after an informed discourse, and so on. This is particularly the case if there is uncertainty about how a decision plays out in practice. In such a situation, changeability is a requirement for democratic participation. Yet such changeability has to be enhanced by design. This requires choosing a specific architecture or using specific methods. Considering that machine learning entails the possibility to adapt, it is changeable by definition. The second tier of designability is the intelligibility of the system. Intelligibility is not used here in its general computer-science sense, that is, the possibility to understand the logic behind a given system’s actions; rather, intelligibility must be constructed democratically. An overall goal might be to make a system understandable to all people who are affected by its actions. While not everybody will in effect decide upon whether and how to employ the respective AI system, the ideal would be that everybody should have the chance to.
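To illustrate how these two tiers could be made concrete for engineers, the following minimal sketch shows one way a development team might encode changeability and intelligibility as an explicit interface. The class and method names are hypothetical assumptions introduced for illustration; they are not drawn from the PAIA or from any existing standard.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class DesignableAISystem(ABC):
    """Hypothetical interface sketching the two tiers of the designability
    principle: changeability and intelligibility. All names here are
    illustrative assumptions, not requirements taken from the PAIA."""

    # Tier 1: changeability -- the system stays open to revision.
    @abstractmethod
    def update(self, feedback: List[Dict[str, Any]]) -> None:
        """Adapt the system in response to documented feedback, for example
        user reports gathered through post-market monitoring."""

    @abstractmethod
    def rollback(self, version: str) -> None:
        """Revert to an earlier version so that contested changes
        remain reversible."""

    # Tier 2: intelligibility -- the system stays understandable
    # to the people affected by it, not only to experts.
    @abstractmethod
    def explain_decision(self, decision_id: str) -> str:
        """Return a plain-language account of a single output."""

    @abstractmethod
    def describe_purpose(self) -> str:
        """State what the system is for and who decided to deploy it,
        so that affected people can engage with that choice."""
```

Framing the two tiers as an explicit interface is only one possible translation; the point is that changeability and intelligibility become concrete obligations a development team can check, rather than abstract ideals.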

Concluding Remarks

Especially in the case of emerging technologies that still need to be explored, furthering democracy cannot only mean producing outcomes everybody generally agrees on. A democratic governance of AI means that there have to be inputs from various stakeholders and democratic processes informing how to steer an evolving technology and its societal impacts. I have outlined seven policy options to include in the PAIA to increase democratic input and ensure democratic processes. If the drafters of the PAIA are serious about advancing democracy, they should offer substantive revisions: democracy demands action instead of mere words, and processes instead of mere rhetoric.


[1] AlgorithmWatch, "AI ethics guidelines global inventory," 1 March 2022, https://inventory.algorithmwatch.org

[2] OECD.AI, "Database of National AI Policies," 1 March 2022, https://oecd.ai/en/dashboards

[3] Richard L. Hasen, "Deep fakes, bots, and siloed justices: American election law in a 'post-truth' world," Saint Louis University Law Review, Vol. 64 (2020): p. 535–568.

[4] Solon Barocas, "The price of precision: Voter microtargeting and its potential harms to the democratic process," in Proceedings of the first edition workshop on politics, elections and data (New York: Association for Computing Machinery, 2012): p. 31–36.

[5] Will Knight, "AI can write disinformation now—and dupe human readers: Georgetown researchers used text generator GPT-3 to write misleading tweets about climate change and foreign affairs. People found the posts persuasive," Wired, 1 March 2022, https://www.wired.com/story/ai-write-disinformation-dupe-human-readers

[6] Prajakta Pradhan, "AI deepfakes: The goose is cooked?," Illinois Law Review (4 October 2020), https://www.illinoislawreview.org/blog/ai-deepfakes/

[7] "Digital Election Interference," Freedom House. https://freedomhouse.org/report/freedom-on-the-net/2019/the-crisis-of-social-media/digital-election-interference

[8] Radina Gigova, "Who Vladimir Putin thinks will rule the world," CNN, 2 September 2017, https://edition.cnn.com/2017/09/01/world/putin-artificial-intelligence-will-rule-world/index.html

[9] European Commission, Directorate-General for Communication and Ursula von der Leyen, A Union that strives for More: My Agenda for Europe: Political Guidelines for the next European Commission 2019-2024 (Publications Office of the European Union, Luxembourg, 2019).

[10] European Commission, White Paper on Artificial Intelligence: A European Approach to Excellence and Trust (COM/2020/65 final, Brussels, 2020).

[11] Jonna Gjaltema, Robbert Biesbroek and Katrien Termeer, "From Government to Governance…to Meta-Governance: A Systematic Literature Review," Public Management Review (2019): p. 1–21.

[12] Christian Djeffal, "AI, democracy, and the law," in Andreas Sudmann (ed.), The Democratization of Artificial Intelligence: Net Politics in the Era of Learning Algorithms (Transcript, Bielefeld, 2019): p. 255–284, 270ff.

CONTRIBUTOR
Christian Djeffal

Professor Dr. Christian Djeffal holds the Professorship for Law, Science and Technology at Technical University of Munich. 
