The digital revolution is having a profound impact on society and democracy as a whole. Our democratic lives and practices have already been affected, and digitalization has the potential to change how we understand and practice democracy over the next five to ten years.
Our goal at Democratic Society is to move away from binary discourses that depict digitalization, big data, algorithmic systems, and AI as either the source of all problems or as the solution we have all been waiting for. These polarized approaches risk remaining politically inert. We need a more nuanced and critical approach, one that takes into account what we know about AI and democracy (and its issues) while also re-imagining how AI could be used for the public good.
Taking the well-known and often quoted Kranzberg's first law of technology as our point of departure, we think that artificial intelligence (or 'technology' in the original) is neither good nor bad; nor is it neutral. As we have observed for many years, the current impact of digitalization and the use of AI in public services is full of threats and challenges for democracy, but the field is evolving, and interesting trends are emerging (among others, around public algorithmic accountability[1] or Trustworthy AI[2]) that deserve a chance to be tried out and perhaps succeed.
Beyond Binary Definitions
The use of digital technologies to facilitate more participative forms of democracy has reached a level of maturity never seen before. We observe digital tools being used to amplify more traditional forms of participation (for instance, through digital activism), alongside the emergence of new tools for 'digital democracy'. When it comes to assessing the impact of AI on democracy and our political lives, however, there is no real consensus on what that impact is, and the discourse remains deeply ambivalent.[3] Despite widespread access to technology (and AI specifically) for leisure and consumption, the use of AI for civic purposes, and the critical debates about how it should be designed and improved, remain limited or unavailable to many. In what ways is AI challenging democracy, and which areas should we monitor to ensure AI (for example, AI-driven communication platforms used to facilitate public debates) does not negatively impact our democratic spaces and deliberation? There is, as yet, no consensus on what the threats are or what a positive impact of AI on democracy might look like.
The European Commission's Democracy Action Plan[4] (published in 2020) focuses, for example, on threats to the integrity of elections, the need for democratic participation, and the need to counter disinformation. Others concentrate on additional democratic elements that are potentially challenged by the widespread use of AI, namely the role of active citizenship, our trust in public authority, and the role of public dialogue.[5] An interesting and more recent body of literature from deliberative democracy[6] looks at the impact of AI on our public spaces understood as 'communicative spaces' (highlighting the impact of AI on content display and content moderation).
All of these concerns are worthy of exploration, and they all extend the dangers of AI for democracy beyond electoral interference. As AI becomes increasingly mainstream, we need to examine broadly how it impacts our democratic politics, how we interact with each other and with our elected officials, and how we understand our role and our rights as citizens.
AI in the Public Space
AI's increased use in public spaces (for example, through facial recognition, automated decision-making, surveillance, and prevention) has brought both new and old types of challenges to democracies and to citizens' participation. AI-based inequalities are arising as a result of biased systems that shape how data is collected, how our data is used, and how AI is used to make life-changing decisions, such as access to welfare benefits or employment opportunities.
As scholars have noted, "AI is not merely another utility that needs to be regulated only once it is mature; it is a powerful force that is reshaping our lives, our interactions, and our environments."[7] It should therefore come as no surprise that there is so much debate about AI accountability and 'Trustworthy AI'. Researchers, civil society organizations, and citizens are increasingly (and rightfully) calling for more public scrutiny, regulation, and clear responsibilities to ensure that AI advances democracy. An AI 'for good' is one that is grounded in a set of values that society can consider desirable, such as democracy, justice, privacy, human rights, and environmental protection.[8]
Citizen engagement and public oversight (the capacity of the public to collectively form opinions, debate, and make decisions on key areas of public life) are two essential elements of a healthy public space and a democratic society. Throughout our work at Democratic Society, we have learned that participation is complex, and that simply inviting citizens to take part in a public debate is not sufficient to ensure true participation. Besides providing opportunities for the public to be engaged, we must ensure that they have the information and confidence to be engaged meaningfully. To examine the implications of the increased use of artificial intelligence in the public sphere, we will consider each of these elements in turn: opportunity, information, and confidence.
We Need to Talk About AI
Digitalization and the field of artificial intelligence are among the areas where the need to rethink participatory and engagement mechanisms has become most prominent.[9] Until recently, citizens' interest and experience in the field of AI were predominantly described negatively: as a "lack" of interest or as an "information deficit." As a result, discussions and policies surrounding this issue are often conducted behind closed doors by experts and technologists.
A recent study on the development of 16 national AI strategies[10] identified public engagement as an increasingly prominent heuristic for the development and deployment of responsible and ethical AI. The study highlights an interesting trend: public engagement in this field is framed around the possible harms that AI can generate, either through attempts to prevent such harms or through efforts to deal with existing negative outcomes.
Re-thinking Public Engagement on AI
As far as AI is concerned, citizens have much more to say: new questions need to be posed in order to rethink participatory processes on such a value-laden subject. First, participation should not focus only on the question of how AI should be used; it should keep open the question of whether and when AI should be deployed in the first place.[11]
Second, those responsible for overseeing a participatory AI project should disclose who will participate in the process and what questions will be asked, in order to encourage actors to engage in a more in-depth process. Third, participation in discussions related to artificial intelligence should cover the different technological stages, starting with design and moving on to development, implementation, and monitoring. Civic engagement in the design of AI makes it possible to go beyond value-focused design and to ensure that principles of justice, and the inclusion of usually marginalized groups, are built in from the outset.[12]
By allowing the public to engage with digitalization processes and with the use and impact of AI, we can create an AI ecosystem that better reflects social values, preferences, and needs, and that anticipates and accounts for AI's possible negative impacts on human rights and inclusion.
Challenging Opacity in Digital Technologies
Making citizens' engagement meaningful, in any field, depends on their access to the most relevant information on the topic they are asked to discuss. Balanced information is vital for the public to form their own opinions, and to change those opinions when confronted with solid arguments and information that support alternative views. This is one of the tenets of deliberative democracy.
When it comes to engaging the public on matters of AI, one of the main barriers lies in the fact that this is an 'opaque' technology, a so-called 'black box'.[13] It is opaque in its design and in its main components, opaque in the way it can constantly evolve through machine learning, and opaque in the way certain inputs translate into certain outputs. As a few scholars have interestingly noted: "Paradoxically, the more AI matters, the less one may be able to realize how much it does."[14]
Towards Explainable AI?
Automated decision-making (ADM) illustrates this opacity well. With ADM, algorithms and artificial intelligence are used to automate decision-making processes by analyzing vast amounts of data without (or, in some cases, with minimal) human involvement. These ADM systems, often poorly designed and rarely discussed publicly, provide little insight into how they reach their conclusions, as the sketch below illustrates. They operate outside known mechanisms for accountability, and yet they currently "affect almost all kinds of human activities, and, most notably, the distribution of services to millions of European citizens – and their access to their rights."[15] AlgorithmWatch, an organization that has been monitoring and assessing the use of ADM in public services over the past years, acknowledges the potential benefits of ADM systems in processing ever-larger amounts of data, yet has found that, in reality, there are very few examples of ADM systems delivering such benefits.
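To make this opacity concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn) of the kind of eligibility model an agency might deploy. All feature names and data are invented for illustration; no real system is being described. The point is simply that such a system returns a verdict without reasons:

```python
# Hypothetical sketch of an opaque ADM system for benefit eligibility.
# All feature names and data are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "historical" records; columns stand in for
# [age, income, months_unemployed]
X_train = rng.normal(size=(500, 3))
y_train = rng.integers(0, 2, size=500)  # past approve/deny outcomes

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A new applicant receives only a verdict; the reasoning stays buried
# in hundreds of decision trees that neither the applicant nor, often,
# the caseworker ever sees.
applicant = rng.normal(size=(1, 3))
decision = model.predict(applicant)[0]
print("Application", "approved" if decision == 1 else "denied")
```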
As these systems have become more and more visible (their existence, how they work, and the impact they make), many have started to speak out against the infringements of rights they have caused. In a 2019 UN report,[16] Philip Alston, Professor of Law and Human Rights, harshly condemned the use of ADM and algorithms in the welfare system in the UK. Among its merits, the report shed light on the extent of the use of algorithms in the UK welfare system and opened a debate about what these systems are and what harm they might cause.
A Proactive Approach to Participation
Unless we rethink the way public scrutiny is done in the field of digitalization and AI, public participation and public opinion can only react to an ADM system once it has already been implemented, and often only after an infringement has already occurred. In a proactive approach, public oversight and participation would instead be embedded throughout the entire technological cycle, confronting the question of whether (and not simply how) AI solutions should be developed in the first place.
An interesting approach to addressing this question of opacity comes from the fast-growing field of XAI (or Explainable AI).[17] Yet the lack of concrete examples of XAI, and the fast pace of technological advancement in the field, threaten to undermine the democratic ability to understand and assimilate these new technologies, as well as the possibility of putting private interests and players under public scrutiny.
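As one hedged illustration of what even a basic XAI technique can offer, the sketch below applies permutation importance (a simple, model-agnostic method, one of many in the field) to an opaque model like the hypothetical eligibility classifier above. Feature names and data remain illustrative assumptions, not a description of any real system:

```python
# Hypothetical sketch: probing an opaque model with permutation
# importance, a simple model-agnostic XAI technique.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["age", "income", "months_unemployed"]

X = rng.normal(size=(500, 3))
y = rng.integers(0, 2, size=500)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that input.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: mean importance {score:.3f}")
```

Even this modest step reveals only which inputs matter in aggregate, not why a given applicant was refused: a reminder that explainability is a spectrum rather than a switch.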
The Confidence to Trust
Trusting or not trusting so-called 'trustworthy' AI is frequently not a choice, since these technologies are sometimes introduced in nontransparent ways or not communicated at all. Furthermore, it is not a choice that each and every one of us can fully and equally make. It has in fact been underlined that certain groups, from particular backgrounds and social classes, who are for instance more reliant on public and social services, have less choice and control over the data they have to share.[18] Many others are also forced to trust AI because they do not possess the critical digital literacy needed to form critical and independent opinions.
Recognizing and Claiming Digital Rights
In a recent project at Democratic Society,[19] in which we engaged citizens in five European cities, digital literacy emerged as an important topic. Residents described it as the ability to take advantage of digital opportunities, whether using banking services or surfing the web, while remaining secure and empowered. Many felt they did not have sufficient digital skills and competencies both to make the most of technology and to have their rights upheld. The acquisition of good digital literacy across the general population should be considered a prerequisite for advancing public scrutiny of AI and technology. To gain confidence in themselves and in the technology they use, citizens must develop the knowledge to recognize where and when their digital rights have been violated, and know how to raise an appeal for justice.
The public's lack of trust can be attributed to a lack of transparency, knowledge, and control over what happens through technologies in our private lives and public spaces. That lack of confidence can have a profound impact on citizens' attitudes towards technology and AI, and can even create a feeling of 'apathy', which does not equate to consent to the status quo or to disinterest, but rather speaks to a loss of agency.[20]
As proposals and initiatives to address critical digital literacy[21] emerge, it is important to build bridges between the organizations, educational institutions, and governments working in this area, so that their efforts are amplified and an environment conducive to the advancement of digital rights and critical digital skills is created.
Concluding Remarks
Since the impact of digitalization on our democratic lives has become so tangible, better and more comprehensive public oversight and engagement are crucial. Working with experts and citizens, we need to create methods and guidelines that equip more people in Europe (and beyond) with the knowledge, skills, and pathways to use AI for the public good. We also need to work with civil servants and elected officials to ensure that open and participatory governance leads to better technology governance.
At Democratic Society, we are focusing on three major areas to contribute in this direction. To ensure citizens have the opportunity to participate, we support governments at different scales in designing and implementing innovative participation models. We also strive to make AI more understandable, and therefore more trustworthy, by providing the public with accurate information. Lastly, we want to ensure that everyone feels confident speaking out about the use of AI in the public sphere and advocating for their own digital rights.
[1] Ada Lovelace Institute, AI Now Institute, and Open Government Partnership, Algorithmic Accountability for the Public Sector (2021). Available at: https://www.opengovpartnership.org/documents/algorithmic-accountability-public-sector/
[2] Mozilla Foundation, Creating Trustworthy AI: A Mozilla White Paper on Challenges and Opportunities in the AI Era (2020).
[3] Birgit Schippers, “Artificial Intelligence and Democratic Politics,” Political Insight, Vol. 11, No. 1 (2020): p. 32-35.
[4] The Action Plan is available to download here: https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2250
[5] Jamie Bartlett, The People vs Tech: How the Internet is Killing Democracy (and How We Save It) (London: Ebury Press, 2018).
[6] Nardine Alnemr, “Emancipation cannot be Programmed: Blind Spots of Algorithmic Facilitation in Online Deliberation,” Contemporary Politics, Vol. 26, No. 5 (2020): p. 531-552, DOI: 10.1080/13569775.2020.1791306; Alexander Buhmann and Christian Fieseler, “Towards a deliberative framework for responsible innovation in artificial intelligence,” Technology in Society, Vol. 64 (2021), https://doi.org/10.1016/j.techsoc.2020.101475; Eleonore Fournier-Tombs and Michael K. MacKenzie, “Big data and democratic speech: Predicting deliberative quality using machine learning techniques,” Methodological Innovations (2021): p. 1-15, DOI: 10.1177/20597991211010416
[7] Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo, and Luciano Floridi, “Artificial Intelligence and the 'Good Society': the U.S., EU, and UK Approach,” Science and Engineering Ethics, Vol. 24, No. 2 (2017): p. 505-528.
[8] Huw Roberts et al., “Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the U.S.,” Science and Engineering Ethics, Vol. 27 (2021): p. 68, https://doi.org/10.1007/s11948-021-00340-7
[9] Mona Sloane et al., “Participation Is Not a Design Fix for Machine Learning” (2021).
[10] Christopher Wilson, “Public Engagement and AI: A Values Analysis of National Strategies,” Government Information Quarterly, Vol. 39, No. 1 (2022). https://doi.org/10.1016/j.giq.2021.101652
[11] Fernando Delgado et al., “Stakeholder Participation in AI: Beyond 'Add Diverse Stakeholders and Stir',” arXiv:2111.01122 [cs] (November 2021). http://arxiv.org/abs/2111.01122
[12] Sasha Costanza-Chock, Design Justice (MIT Press, 2020).
[13] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015).
[14] Corinne Cath et al., “Artificial intelligence and the ‘good society’: The U.S., EU, and UK approach,” Science and Engineering Ethics, Vol. 24 (2017): p. 505-528, DOI: 10.1007/s11948-017-9901-7
[15] Fabio Chiusi, Sarah Fischer, Nicolas Kayser-Bril, and Matthias Spielkamp, Automating Society Report 2020 (AlgorithmWatch, 2020). Available at: https://automatingsociety.algorithmwatch.org
[16] United Nations Human Rights Council, Visit to the United Kingdom of Great Britain and Northern Ireland: Report of the Special Rapporteur on extreme poverty and human rights (2019).
[17] Derek Doran et al., “What does explainable AI really mean? A new conceptualization of perspectives,” arXiv:1710.00794 (2017); and Andreas Holzinger, “Explainable AI (ex-AI),” Informatik Spektrum, Vol. 41 (2018): p. 138-143, https://doi.org/10.1007/s00287-018-1102-5
[18] Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin's Publishing Group, 2018).
[19] https://www.demsoc.org/blog/launch-of-the-citizen-voices-for-digital-rights-cvdr-report
[20] Lina Dencik and Jonathan Cable, “The Advent of Surveillance Realism: Public Opinion and Activist Responses to the Snowden Leaks,” International Journal of Communication, Vol. 11 (2017): p. 763-781.
[21] Soledad Magnone, “Critical digital education for all” (2022), available at https://points.datasociety.net/critical-digital-education-for-all-adbf1ab82e17; and Ina Sander, “What is critical big data literacy and how can it be implemented?,” Internet Policy Review, Vol. 9, No. 2 (2020), DOI: 10.14763/2020.2.1479