
Data Science and Artificial Intelligence (AI), combined with other technologies such as internet platforms, smart infrastructure, and blockchain, suggest a societal shift comparable to the Industrial Revolution. The capacity to create new types of knowledge and business models[1] has already affected all aspects of our lives, from the way we live and work at the individual level to the way our macro-political systems and processes operate. Yuval Noah Harari, in his best-selling book Homo Deus: A Brief History of Tomorrow, draws attention to the basic relationship between the economic and military relevance of individuals and the evolution of our current democratic models. He points to the fact that new business models created after the Industrial Revolution (e.g. factories with assembly lines) essentially led to the emergence of a ‘working class’ that could disrupt economic activity when needed, and thereby gain the ability to negotiate improvements in its living conditions and democratic rights. Similarly, many Western countries recognized women’s voting rights and participation in democratic processes only in the past century – and that was not entirely out of goodwill either. The feminist movement and women’s fight for their rights were largely enabled by increasing female agency in economic and military processes in the new world order. The question then becomes: how will the new capabilities granted by AI and associated technologies change individual contributions to society, and how will these developments affect societal power and dominance?

The Fundamental Challenges Around AI Threatening Democratic Systems and Values

The intelligence attributed to machines so far is largely due to their capacity to ‘learn’ directly from data without explicit human programming or established theoretical models.[2] The main concern is the opacity of this ‘learning’ process, which limits the legitimacy of its outcomes when utilized in socio-economic contexts. Unexplainable decisions in policing or criminal sentencing are unacceptable by any modern democratic standard. In recent years, experience has also revealed another major challenge: the data-based nature of these technologies reproduces and amplifies the known and unknown problems associated with human decision-making. Hiring algorithms have already started to discriminate against women for technical jobs,[3] and systems designed to support public safety and security have already started to punish minority groups in unfair ways.[4] The ability of AI-based systems to personalize decisions also allows targeting and discriminating against individuals in unique and untraceable ways, and hence manipulating populations at unprecedented scale. When this power is exercised during an election campaign, questions arise as to the free will of individual voters, their equal participation in the process, and their freedom of expression and information, putting the very foundation of a healthy democratic system at risk. On this last point, the Facebook-Cambridge Analytica scandal deserves a special mention.[5] The automation of services also means radical changes in the job market. Physical labour and repetitive professional tasks are increasingly handled by algorithmic processes. New jobs emerge, but they require more advanced training and skills. Even though living standards are likely to improve as a result of these developments, there is also growing concern that inequalities in wealth distribution will worsen.
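To make the underlying mechanism concrete, the following minimal sketch (in Python, using entirely synthetic data and a standard scikit-learn classifier; the variable names and numbers are hypothetical, not drawn from any real system) shows how a model trained to imitate historically biased hiring decisions reproduces that bias in its own recommendations:

```python
# Illustrative sketch only (synthetic data): a model that 'learns' from
# historically biased hiring decisions reproduces that bias in its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)       # 0 = male, 1 = female (hypothetical encoding)
skill = rng.normal(0, 1, n)          # skill is generated identically for both groups

# Historical decisions: equally skilled women were hired less often (injected bias).
hired = (skill + 0.8 * (gender == 0) + rng.normal(0, 0.5, n)) > 0.5

# Train a classifier to imitate the historical decisions.
X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)

# The learned model recommends men at a markedly higher rate, even though
# nothing in the code instructs it to disadvantage women.
pred = model.predict(X)
for g, label in [(0, "male"), (1, "female")]:
    print(label, "predicted hire rate:", pred[gender == g].mean().round(2))
```

The point is not the specific numbers, which are invented, but the mechanism: the disparity is inherited entirely from the historical decisions the model is trained to reproduce, which is why such systems can quietly amplify existing discrimination.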

In a macro sense, the functions and legitimacy of nation-states as the basic structures of our current world order are increasingly challenged. “In a lot of ways, Facebook is more like a government than a traditional company – we have this large community of people, and more than other companies we are really setting policies,” said Mark Zuckerberg in 2017.[6] The same can be said for other major internet platforms today, which are also spearheading and dominating the AI space. Our lives are increasingly shaped by the products of these giant tech companies, allowing them to set the rules of engagement for populations much larger than existing nation-states. Today, Google controls 90 percent of the global search engine market and has nearly 4 billion users worldwide,[7] and Meta Platforms Inc. (formerly Facebook) has 3.6 billion monthly active users.[8] Public decisions are increasingly made by such profit-driven companies behind intellectual property protections, rather than by democratically elected governments accountable to their citizens.

Nation-states and international organizations are also falling behind in their innovation and regulatory capacity in the AI space. As opposed to technology companies, which focus on low-risk domains like marketing and advertising, governments are often involved in highly complex areas like public health, security, migration and immigration, public resource allocation, and city management. The growing skills gap between the public and private sectors also means governments are rapidly losing their understanding of the technology, and hence their ability to effectively regulate this space. The distinct ecosystems emerging in AI-based technology development also present new challenges. The U.S.’s open market approach, empowering multinational digital corporations, contrasts with greater central government control in the Chinese digital ecosystem, and radically differs from Europe’s priority on individual rights, personal privacy, and digital sovereignty. The Global South remains underrepresented in the global debate, and effective international collaboration structures are lacking.

The ‘Algocracy’ Debate

In 2020, algorithms received a major public backlash in the UK when the exams regulator Ofqual used one to predict the results of school exams that had been cancelled due to Covid-19 measures.[9] Teenagers rallied in London, demanding political attention to the disparate effect the Ofqual algorithm had, particularly on talented students from disadvantaged backgrounds. Although it can easily be argued that this was actually a failure of policy rather than a failure of the algorithm itself,[10] the outcry led to stronger public misgivings about the use of algorithmic decision components in governance operations.

In spite of growing and genuine public and policy concerns, ‘learning’ algorithms are trusted more and more with everyday operations and important public decisions. The term ‘algocracy’, coined by sociologist A. Aneesh in 2006,[11] aptly captures the essence of such new governance forms. Automated and primarily data-driven decision components in governance are fundamentally transforming the way our democratic processes work, citizen services are provided, and justice is delivered. Power is increasingly exercised ex ante via code, requiring lower levels of commitment from governing actors;[12] and ‘learned’ algorithms increasingly substitute for judgement exercised by identifiable human subjects who can be held to account.[13] An important topic of debate is how to balance algorithmic governance against human rights, and how much autonomy we can grant such opaque decision-making systems in complex socio-economic contexts that affect individuals in both known and unknown ways.

Can We Advance Our Democracies with the Help of Algorithms?

Techno-utopian perspectives see great potential in using algorithmic support to overcome the imperfections of human and institutional decision-making and faulty forms of knowledge.[14] Critics, on the other hand, draw attention to the potential harms algorithms can cause by automating and amplifying the problems and imperfections we know – or don’t know – to exist in human and institutional decision-making. A utilitarian approach would be to embrace the possibilities, albeit with considerable caution, in light of the irreversible shift in everyday activities and the ever-growing reliance on algorithmic mediation.

Returning to the comparison with the Industrial Revolution, we are once again at a historical crossroads, where the biggest challenges of the modern era can be addressed in previously unimaginable ways. New kinds of ‘intelligence’ based on artificial systems do hold the potential to address some of our chronic problems, including social inequality and racial injustice in democratic processes. The prerequisite is that these systems should be developed and deployed responsibly by legitimate actors and prioritize public and individual interests – a point that Stuart Russell captures well with his ‘benevolent machines’ concept.[15] I believe the main components in this direction include, but are not limited to, the following:

  • AI research in the public interest should go beyond reactive technology regulation to the proactive design of new socio-technical processes, reframing good democratic governance holistically. Principles such as equity and fairness, participation, the rule of law, effectiveness, accountability, and human rights should be reformulated to account for algorithmic agency in the process, along with existing hierarchies, bureaucracies, and networks of governance;
  • True ‘transdisciplinarity’ in research should be achieved, unifying knowledge around common problems by breaking the conventional boundaries between disciplines (e.g., computer science, systems design, politics, philosophy, law, anthropology, etc.) and sectors (public, private, and voluntary) in order to share knowledge and co-produce approaches to address complex societal problems;
  • We need to rethink the process of designing AI systems – algorithmic learning goals and the resulting actions should be free of the imperfections of human and institutional decision-making (such as bias and prejudice) and should be adaptable to the best interests of humans and the planet in light of dynamic issues;

  • AI technology should be held to higher legal and regulatory standards, and monitoring and enforcement tools should be developed for human-machine hybrid environments. Fair algorithmic insight, transparency and explainability of the process, and the robustness, stability, and interpretability of such mechanisms are all critical elements of building trust in AI systems for use in complex socio-economic contexts.

The digital revolution, empowered by AI and related technologies, is radically reshaping our democratic institutions and processes, from how elections are conducted and justice is delivered to how individuals form opinions and contribute to the economy. In democratic countries, it is increasingly the private sector, with its primary profit incentives and limited accountability to the public, that controls public perception and interactions, leaving the public sector behind. Governments are more resistant to embracing the innovation potential of these developments, given that their high-stakes decision contexts place a heavy emphasis on public safety and security. With the growing skills gap between the private and the public sector, governments are also losing their ability to effectively regulate this space. Unless there are fundamental breakthroughs in how AI systems are developed and deployed, as well as in how the ecosystems that shape and accommodate these innovations are constructed, we risk missing out entirely on significant opportunities to advance democratic systems and values in the future.


[1] Zeynep Engin and Philip Treleaven, “Algorithmic Government: Automating Public Services and Supporting Civil Servants in using Data Science Technologies,” The Computer Journal, Vol. 62, No. 3 (March 2019).

[2] Although this is mainly associated with Machine Learning (ML), the broader set of AI technologies is conventionally referred to as ‘data-driven’ technologies.

[3] Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, 11 October 2018.

[4] Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks,” ProPublica, 23 May 2016.

[5] Rosalie Chan, “The Cambridge Analytica whistleblower explains how the firm used Facebook data to sway elections,” Business Insider, 5 October 2019. 

[6] Franklin Foer, “Facebook’s war on free will,” The Guardian, 19 September 2017. 

[7] Deyan Georgiev, “111+ Google Statistics and Facts That Reveal Everything About the Tech Giant,” Review42, 18 January 2022.

[8] Jason Wise, “Meta Platforms Inc Statistics 2022: Revenue, Users, Acquisitions & Shares,” EarthWeb, 17 February 2022.

[9] David Hughes, “What is the A-level algorithm? How the Ofqual’s grade calculation worked – and its effect on 2020 results explained,” iNews, 17 August 2020.

[10] Roger Taylor, Is the Algorithm Working for Us?, Centre for Progressive Policy (June 2021).

[11] A. Aneesh, Virtual Migration: The Programming of Globalization (Duke University Press, 2006).

[12] Daria Gritsenko and Matthew Wood, “Algorithmic Governance: A modes of governance approach,” Regulation & Governance (2020).

[13] Matthew B. Crawford, “Algorithmic Governance and Political Legitimacy,” American Affairs, 20 May 2019. 

[14] Malcolm Campbell-Verduyn et al., “Big Data and algorithmic governance: the case of financial practices,” New Political Economy (2017).

[15] Stuart Russell, Human Compatible: AI and the Problem of Control (Penguin Books Ltd., 2019).

CONTRIBUTOR
Zeynep Engin

Dr. Zeynep Engin is a Senior Research Associate at UCL Computer Science (UK); Founder & Director at Data for Policy CIC; Editor-in-Chief for Data & Policy (published by Cambridge University Press). 
