
AI has become a powerful tool with the potential to change virtually everything around us. What makes it particularly powerful is that it can change the way we think and what we believe, which means it has the power to change our very identity. Users of social media platforms such as Facebook are constantly bombarded with information tailor-made by sophisticated algorithms to keep them on the site for as long as possible. These algorithms can also build a filter bubble around users, shielding them from information they dislike and surrounding them with information they already agree with. This tremendous power makes AI a real force not only as a tool for business corporations but also in government and politics. Governments are realizing that AI can help them accomplish their goals and are competing against one another to develop and deploy it, acting on Russian President Vladimir Putin's claim that the country that takes the lead in AI will be the ruler of the world.[1]
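To make the mechanism concrete, the sketch below (in Python, with invented users and items; real platforms use learned models over far richer behavioral data) shows how an engagement-driven recommender narrows a feed: because items are scored by similarity to what the user already liked, each round of recommendations reinforces the bubble.

```python
# A minimal sketch of an engagement-driven "filter bubble" recommender.
# All names and data are invented; real systems use learned models over
# much richer behavioral signals, but the feedback loop is the same.

def similarity(tags_a: set, tags_b: set) -> float:
    """Jaccard similarity between two sets of topic tags."""
    if not tags_a or not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

def recommend(items: list, liked: list, k: int = 2) -> list:
    """Rank items by similarity to what the user already liked.

    Scoring by closeness to past engagement pulls each new feed
    further toward the user's existing views -- the filter bubble.
    """
    def score(item):
        return max(similarity(item["tags"], past["tags"]) for past in liked)
    return sorted(items, key=score, reverse=True)[:k]

liked = [{"title": "Party A rally", "tags": {"politics", "party_a"}}]
feed = [
    {"title": "Party A wins debate", "tags": {"politics", "party_a"}},
    {"title": "Party A fundraiser",  "tags": {"politics", "party_a"}},
    {"title": "Party B policy plan", "tags": {"politics", "party_b"}},
    {"title": "Gardening tips",      "tags": {"hobby", "garden"}},
]
for item in recommend(feed, liked):
    print(item["title"])  # only the Party A stories make the cut
```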

Examples of how governments are using AI to accomplish their goals abound. China is indisputably one of the two world leaders in AI, the other of course being the United States. Chinese citizens can hardly get through a day without their ubiquitous mobile phones, which they use for everything from making small payments to listening to music. The story of the social credit system is by now rather well known.[2] The idea is that AI analyzes the data generated by citizens for the purpose of influencing their behavior. Opponents of the system claim that it exemplifies a paternalistic society in which the authorities try to shape how citizens think and behave, micromanaging the minute details of each citizen's life; proponents counter that it has been effective in generating trust outside one's immediate circle of friends and relatives, the kind of trust necessary for society to function smoothly. In any case, the social credit system clearly underscores the tremendous power of AI to transform people's behavior and beliefs in a way that was inconceivable just a few years ago.

The power of AI thus gives rise to the question of how AI relates to democratic values. Is AI inherently inimical to them? Or could AI be developed in a way that supports democracy? The question has assumed great importance now that AI has moved to the very top of the agenda of almost every government in the world. On the one hand, the threat of AI to the very survival of democracy seems to be everywhere. China is the West's main competitor not only in economics and the projection of political power but also in the realm of philosophical and ethical values. On the other hand, a few observers, including myself, believe that AI could also be a force for social good, which certainly involves inculcating democratic values. In my country, Thailand, a contest has been going on for more than a decade between those who want to follow China's lead and those who want to remain with the West. It is conceivable that AI could play a key role in both camps.

On the surface, AI itself seems to belong to the antidemocracy camp. The well-documented uses of AI to advance the agenda of the government in China, and by giant American corporations in what Shoshana Zuboff calls 'surveillance capitalism,' clearly attest to the likelihood of AI being a tool for creating and maintaining the kind of power that runs against democratic values.[3] Zuboff herself calls for democracy to fight back against the threat posed by surveillance capitalism. Furthermore, there is a fear that democracy will be eroded as AI is increasingly used to make choices for human beings.[4] Once we relegate our choices to algorithms, our democracy will certainly be undermined, as power shifts toward the algorithms and those who design and deploy them. According to Mathias Risse, AI and democracy are not "natural allies," and ways must be found to undertake the difficult task of channeling the power of AI to serve democracy.[5] Dirk Helbing and his colleagues argue that human beings need to make the right decisions about AI now; otherwise, the consequences could be dangerous.[6] Eleni Christodoulou and Kalypso Iordanou, in addition, claim that "[t]he use of Big Data and AI in digital media are often incongruent with fundamental democratic principles and human rights. The dominant paradigm is one of covert exploitation, erosion of individual agency and autonomy, and a sheer lack of transparency and accountability, reminiscent of authoritarian dynamics rather than of a digital well-being with equal and active participation of informed citizens."[7] Visar Berisha claims that AI is "a threat to democracy," as it can be used to disrupt democratic processes such as elections.[8] Looking from a religious perspective, Peter Hershock argues that the environment created by AI is a "predicament" resulting in a loss of the individual freedom that democracy requires.[9] The list of warnings that AI poses a serious threat to democracy goes on and on.

However, some of these scholars also claim, in effect, that the link between AI and the threat to democracy is not deterministic; it is not inevitable that AI will erode democracy and democratic values. After warning that AI will undermine democracy by making choices for us, Amy Webb recommends a global alliance in which people from around the world come together and deliberate on global norms for AI in an inclusive manner. Christodoulou and Iordanou likewise claim that identifying key ethical issues in AI is a necessary first step toward responsible innovation and the protection of democratic ideals.

Perhaps one reason these scholars are so suspicious of AI is that they have been confronted with real examples of AI being used to bolster authoritarian regimes or to enhance the power of giant multinational corporations. Imagining a totally different path for AI is indeed a difficult challenge. Nonetheless, there is nothing inherent in AI itself that prevents it from being a force for democracy. As an information technology, AI is neutral toward the content of the information or the data it processes. It is only because contemporary AI is mostly used to bolster the power of political regimes and giant corporations that it seems a threat to democracy. But if we imagine a different path, one that has nothing to do with either governments or giant corporations, AI might not seem so threatening after all.

For example, let us imagine an alternative scenario in which, instead of being a tool for authoritarian governments and giant corporations, AI lies in the hands of the people. This sounds far-fetched, but nothing in the technology itself prevents it from being realized. The only obstacles are economic and political ones, and these do not seem insurmountable in principle. When AI belongs to the people, it is likely to be used to promote democratic values such as the rule of law and respect for human dignity and rights. Certainly, mechanisms must be in place to safeguard those rights, since some groups might use the power of AI to gain the upper hand over others. Such mechanisms contribute to the kind of environment that needs to exist for AI to become an ally of democratic values; part of them consists of global norms and regulations on the use of AI, together with effective means of enforcing them.

A challenge to this proposal is that contemporary AI requires a great deal of data, and managing such huge amounts of data requires a strong form of organization, which seems to put it out of reach of ordinary people. This is a valid objection. Individual persons obviously cannot create and operate AI algorithms on such a large scale on their own. One way to address this is to establish an agency that represents the people and acts on their behalf, conducting research and development on AI and making sure that the technology really belongs to the people and serves their interests. Another is to enact effective laws and regulations on how corporations operate so that the gap in power between them and the people is reduced. In the same way, effective laws must be in place to rein in the government's power. These two approaches complement each other, and both need to exist. The agency mentioned earlier could be a public organization, such as a university or research institute, tasked with developing the technology but without commercial interests. Such an agency, or agencies, must work hand in hand with the rules and regulations that monitor and oversee how business corporations operate.

What we need to do to ensure that AI is in line with our democratic values is to ensure that these rights are actively promoted and respected. Consider some more specific examples. AI is increasingly being used in the law. It can speed up the process of justice: analyzing legal documents, performing discovery across thousands of pages, and conducting legal research on relevant cases. These processes can take a great deal of time when done the traditional way by human lawyers and their assistants; because the work is repetitive and algorithmic, however, AI is well suited to the task and can speed it up tremendously. As the legal maxim goes, justice delayed is justice denied.
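As a toy illustration of the document-triage side of this work, the following sketch ranks hypothetical case files by how many query terms they contain, so that human reviewers see the likeliest-relevant documents first. Production e-discovery tools rely on trained language models rather than raw term counts, but the workflow is the same.

```python
# A toy sketch of AI-assisted legal discovery: score documents for
# relevance to a query so lawyers review the likeliest hits first.
# The case files are invented; real e-discovery tools use trained
# language models, but simple term overlap shows the principle.

import re
from collections import Counter

def tokenize(text: str) -> list:
    return re.findall(r"[a-z']+", text.lower())

def relevance(doc: str, query: str) -> int:
    """Count occurrences of the query's terms in the document."""
    doc_terms = Counter(tokenize(doc))
    return sum(doc_terms[term] for term in tokenize(query))

documents = {  # hypothetical case files
    "doc1": "The contract was breached when delivery failed in March.",
    "doc2": "Minutes of the annual general meeting; no disputes noted.",
    "doc3": "Email thread discussing the breached delivery contract terms.",
}

query = "breached contract delivery"
ranked = sorted(documents, key=lambda d: relevance(documents[d], query), reverse=True)
print(ranked)  # doc1 and doc3 surface first for human review
```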

Another use of AI in law is more advanced: AI acting as a judge in a court of law. This has been in place in China since 2017. Millions of cases in China are now handled by AI algorithms, according to Tara Vasdani in an article on the website LexisNexis.ca.[10] According to Vasdani, this 'smart court' system "includes non-human judges, powered by artificial intelligence (AI) and allows participants to register their cases online and resolve their matters via a digital court hearing." She also mentions that other countries, such as Estonia, have put AI judges in place to settle small-claims disputes.

There is a lot to be said about AI judges. How can we be certain that justice will be served? The question points to one of the most serious problems facing AI today: bias in algorithms. We can imagine that in cases that are simple and do not involve many legal or interpretive complications, the role of the judge is little more than algorithmic: the judge compares the facts of the matter with precedents and the letter of the law and rules accordingly. At this level it is at least conceivable that a good AI algorithm can do the job. In any case, there must be mechanisms to make sure that the process is indeed free from bias.
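A deliberately simple sketch of this 'little more than algorithmic' adjudication, under the assumption of a small, uncontested claim, might look as follows; the rules and the case are invented, and a real smart-court system would combine statute, precedent retrieval, and human oversight.

```python
# A deliberately simple sketch of rule-based adjudication for a small,
# uncontested claim: compare the facts of a case against codified rules
# and report the outcome. Rules and case are invented; a real system
# would combine statutes, precedent retrieval, and human oversight.

RULES = [
    # (condition over the case facts, verdict when the condition holds)
    (lambda c: c["debt_acknowledged"] and c["amount"] <= 5000,
     "order repayment within 30 days"),
    (lambda c: not c["debt_acknowledged"],
     "refer to a human judge for a contested hearing"),
]

def decide(case: dict) -> str:
    for condition, verdict in RULES:
        if condition(case):
            return verdict
    # Default: never let the algorithm decide beyond its rules.
    return "refer to a human judge"

case = {"amount": 1200, "debt_acknowledged": True}
print(decide(case))  # -> order repayment within 30 days
```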

Here we can focus on the process by which the AI judge software, or any AI algorithm applied to the real world, is designed and programmed. Apart from the rules and regulations mentioned earlier, the engineers who build AI from the ground up need to be aware of and sensitive to any bias that might occur when their software manipulates data. The organizations they work for need internal monitoring and auditing systems that help programmers and engineers learn to identify and prevent biases in their programs. The process works in much the same way as when programmers are trained to spot errors, or "bugs": throughout the writing of a piece of software, programmers constantly look for them. The biases that come with an AI system may not be as easy to identify as ordinary bugs, but that is largely because programmers are typically not trained to spot them. It is not enough to let the algorithm loose in the wild, so to speak, and learn one's lessons later.
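One concrete form such internal auditing can take is a statistical check run like a unit test: compare the model's favourable-outcome rate across groups and fail the build when the gap is too wide. The sketch below uses hypothetical predictions and a hypothetical tolerance; real audits apply several fairness metrics, not just this one.

```python
# A minimal sketch of the internal bias audit described above: a check,
# run like a unit test, comparing a model's favourable-outcome rate
# across demographic groups. Data and tolerance are hypothetical; real
# audits use several fairness metrics, not just this one.

def positive_rate(outcomes: list) -> float:
    return sum(outcomes) / len(outcomes)

def parity_gap(by_group: dict) -> float:
    """Largest difference in favourable-outcome rate between groups."""
    rates = [positive_rate(v) for v in by_group.values()]
    return max(rates) - min(rates)

def audit(by_group: dict, tolerance: float = 0.1) -> None:
    gap = parity_gap(by_group)
    # Fail loudly, the way a failing unit test flags an ordinary bug.
    assert gap <= tolerance, f"bias check failed: outcome gap {gap:.2f} > {tolerance}"

# Hypothetical model decisions (1 = favourable outcome), split by group:
predictions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

try:
    audit(predictions)
except AssertionError as err:
    print(err)  # bias check failed: outcome gap 0.38 > 0.1
```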

Microsoft's Tay chatbot is a clear example. In 2016, Microsoft released Tay, a chatbot that assumed the personality of a teenage girl on Twitter.[11] The bot could engage in conversation and churn out tweets just as a normal teen would. Within hours of its release, however, Tay began sending out tweets in language its programmers could never have dreamed of: racist and sexist messages filled with hate speech. It was clear that the bot had picked up this language from what it found on Twitter, and Microsoft soon took it offline. The incident showed that an AI-powered chatbot can learn to talk from its environment, but what Tay had not learned was how to recognize hate speech and racist language and avoid them in the first place. This is the kind of "bug" that I argue programmers need to be able to identify and squash as soon as it is found. For this to be possible, programmers would need to instill in the software some kind of ethical ability, analogous to the way a young child, in learning to become a member of her society, must also hone her ethical skills. And for this to happen, the programmers themselves need to hone their own ethical skills too.
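A minimal sketch of the safeguard Tay lacked might screen messages before the bot learns from them. The blocklist below is a hypothetical placeholder for a trained toxicity classifier; the point is where the check sits, before learning rather than after deployment.

```python
# A minimal sketch of the safeguard Tay lacked: screen user messages
# before the bot learns from them. BLOCKED_TERMS is a hypothetical
# placeholder; production systems use trained toxicity classifiers,
# but the placement of the check -- before learning -- is the point.

BLOCKED_TERMS = {"slur1", "slur2"}  # placeholder tokens, not real terms

learned_phrases = []  # the bot's pool of phrases to imitate

def is_acceptable(message: str) -> bool:
    """Reject any message containing a blocked term."""
    return not (set(message.lower().split()) & BLOCKED_TERMS)

def learn_from(message: str) -> None:
    """Add a message to the training pool only if it passes the filter."""
    if is_acceptable(message):
        learned_phrases.append(message)
    # Rejected messages are dropped (and could be logged for review).

learn_from("hello there, nice weather today")
learn_from("slur1 targeted abuse")  # silently rejected by the filter
print(learned_phrases)              # only the benign message was learned
```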

The agency I mentioned earlier can also contribute to AI for democracy by researching and developing AI trained from the beginning on data imbued with democratic values. For example, an algorithm could be trained to spot gender inequality in daily life and warn users accordingly. People often live in societies or cultures in which such inequality is so deeply ingrained that even a well-intentioned person might fail to recognize it when it occurs. AI can then help correct for these biases by identifying instances of gender inequality as they arise, thereby educating users and guiding them toward a more equitable society. In a way this is to produce a "biased" algorithm, as it is "biased" toward identifying inequality, but it is a good bias, aimed at promoting democratic values.
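As a toy version of such a deliberately "biased" assistant, the sketch below flags gendered phrasing in a draft text and suggests neutral alternatives. The pattern table is invented for illustration; a deployed tool would use a trained classifier sensitive to far subtler cues than word choice.

```python
# A toy version of the "good bias" assistant described above: scan a
# draft for gendered phrasing and suggest neutral alternatives. The
# pattern table is invented; a deployed tool would use a trained
# classifier sensitive to far subtler cues than word choice.

import re

NEUTRAL_ALTERNATIVES = {
    r"\bchairman\b": "chairperson",
    r"\bmanpower\b": "workforce",
    r"\bhe or she\b": "they",
}

def flag_inequality(text: str) -> list:
    """Return a suggestion for each gendered phrase found in the text."""
    warnings = []
    for pattern, neutral in NEUTRAL_ALTERNATIVES.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            warnings.append(f"'{match.group()}' -> consider '{neutral}'")
    return warnings

draft = "The chairman said the project needs more manpower."
for warning in flag_inequality(draft):
    print(warning)
# 'chairman' -> consider 'chairperson'
# 'manpower' -> consider 'workforce'
```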

There are various ways programmers can be trained to build AI that is sensitive to democratic values rather than hostile to them. One is to give them the opportunity to learn about the larger society rather than only the narrow specialty required to become programmers; they will then be able to see the bigger picture and come to understand their roles and responsibilities. Another is that the companies or organizations they work for must have internal mechanisms ensuring that employees look out for "ethical bugs" alongside the ordinary bugs they are trained to spot. Internal corporate rules, however, are not enough on their own. There must be a system of legal mechanisms ensuring that corporations and other organizations follow the same set of norms, and this system must extend to the international level because the operations of many giant corporations span the globe. If these conditions are in place, AI can contribute substantially to democracy and its associated values; it is not inherently an enemy of democracy, as many seem to think.


[1] James Vincent, "Putin says the nation that leads in AI 'will be the ruler of the world'," The Verge, 4 September 2017, https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world

[2] Katie Canales, "China's 'Social Credit' System Ranks Citizens and Punishes them with Throttled Internet Speeds and Flight Bans if the Communist Party deems them Untrustworthy," Business Insider, 24 December 2021, https://www.businessinsider.com/china-social-credit-system-punishments-and-rewards-explained-2018-4

[3] Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Public Affairs, 2019).

[4] Karen Hao, "Why AI is a Threat to Democracy - and What We can do to Stop It," MIT Technology Review, 26 February 2019, https://www.technologyreview.com/2019/02/26/66043/why-ai-is-a-threat-to-democracyand-what-we-can-do-to-stop-it/

[5] Mathias Risse, "Artificial Intelligence and the Past, Present and Future of Democracy," CARR Center for Human Rights Policy, 26 July 2021, https://carrcenter.hks.harvard.edu/files/cchr/files/ai-and-democracy

[6] Dirk Helbing et al., "Will Democracy Survive Big Data and Artificial Intelligence?," Scientific American, 25 February 2017, https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/

[7] Eleni Christodoulou and Kalypso Iordanou, "Democracy Under Attack: Challenges of Addressing Ethical Issues of AI and Big Data for More Democratic Digital Media and Societies," Frontiers in Political Science, Vol. 3 (July 2021), https://www.frontiersin.org/articles/10.3389/fpos.2021.682945/full

[8] Visar Berisha, "AI as a Threat to Democracy: Towards an Empirically Grounded Theory," MA thesis, Uppsala Universitet, 2017, https://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A1179633&dswid=-3564

[9] Peter D. Hershock, Buddhism and Intelligent Technology: Toward a More Humane Future (Bloomsbury, 2021).

[10] Tara Vasdani, "Robot Justice: China's Use of Internet Courts," LexisNexis, February 2020, https://www.lexisnexis.ca/en-ca/ihc/2020-02/robot-justice-chinas-use-of-internet-courts.page

[11] Oscar Schwartz, "In 2016, Microsoft's Racist Chatbot Revealed the Dangers of Online Conversation," IEEE Spectrum, 25 November 2019, https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation

CONTRIBUTOR
Soraj Hongladarom

Soraj Hongladarom is a Professor of Philosophy and Director of the Center for Science, Technology, and Society at Chulalongkorn University in Bangkok, Thailand. 
