Artificial Intelligence (AI) is a popular, widely analyzed, and much-speculated-about area of Computer Science with direct implications for society across legal, regulatory, economic, and educational dimensions. With recent developments in technology (including cloud computing, the explosion of big data, and prolific advances in AI algorithms), the cost of developing and distributing AI applications has fallen significantly, democratizing the once expensive and challenging realm of AI throughout the world. As the impact of AI is now visible across different disciplines (law, media, economics, and defense, among others), it is crucial to evaluate, and proactively plan for, a world where AI coexists with human organizations.
Brief Definition of Artificial Intelligence (AI)
What is intelligence, and what makes humans intelligent? Can non-living beings demonstrate intelligence; in other words, is it possible to have “artificial” intelligence rather than “natural” intelligence? If the answer is yes, what would a good example of artificial intelligence be? The first philosopher to contemplate a non-living intelligent agent that could answer questions directed at it was Descartes in the 1600s.[1] And while the first computers were designed in the 19th century (the first mechanical computer was designed by Charles Babbage in 1822), advances in machinery and electronics later led to far more powerful computers after World War II, and to microprocessors and personal computers in the 1960s and 1970s. As the size and cost of computers decreased, their processing capabilities increased constantly and exponentially. Even more important was the development of the Internet, which enabled these devices to connect to each other, scaling their capabilities immensely in the 1990s and the 21st century.
Along with these developments, computers have always depended on the IPO model (Input, Processing, Output): receiving inputs (e.g., touching a screen, pressing a keyboard button), processing them (e.g., making a calculation, formatting the text of an email), and producing outputs (e.g., displaying an image on a screen, writing the calculated number to a disk). How a set of inputs is “processed” depends on an algorithm, exactly defined and explicitly codified by a “programmer”: if the user presses a particular key, do X; if the user presses another key, do Y; and so forth. Interestingly, this IPO model is very similar to how our brains process information. The nervous system of living organisms relies on inputs (e.g., sights, sounds), processing capabilities (e.g., reading and understanding text), and outputs (memorizing the text or speaking it aloud), much as computers do. This similarity between living organisms and computers has led to the question of whether computers can act like other intelligent agents such as humans, dolphins, or ants, which raises the further question of what intelligence is. These questions have been debated by computer scientists, philosophers, and neuroscientists since the 1950s.
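As a minimal, purely illustrative sketch of this explicitly programmed style (the keys and actions below are invented for the example), every behavior is spelled out in advance by the programmer:

```python
# Explicit "if input X, do Y" programming: every rule is hand-written,
# nothing is learned from data.

def process_key(key: str) -> str:
    """Map a keyboard input to an output using hand-written rules."""
    if key == "a":
        return "display the letter 'a' on the screen"
    elif key == "Enter":
        return "send the email"
    elif key == "Esc":
        return "close the window"
    else:
        return "ignore the input"

if __name__ == "__main__":
    for pressed in ["a", "Enter", "F12"]:
        print(pressed, "->", process_key(pressed))
```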
Max Tegmark, in his book “Life 3.0: Being Human in the Age of Artificial Intelligence,” defines intelligence as “the ability to accomplish complex tasks” and breaks it down into three main components: processing, memory, and learning.[2] From both processing-power and memory perspectives, humans, long among the most capable living beings, have recently been surpassed by computers thanks to advances in microprocessors. “Learning” as a concept is challenging for computers, though – as noted above, computers have relied on very explicitly written definitions of exactly how they should behave in each instance of running a program. Because of the complexity of our universe, however, an exact and explicit definition of all combinations of all objects is very challenging, if not impossible, which makes “learning” capabilities necessary. Machine Learning (ML), therefore, is the key construct that enables artificial intelligence. The explosion of data across the internet (residing in the massive data centers that power the cloud as well as on smart edge devices), enhancements in graphics processors, and the willingness of researchers around the world to share their learning algorithms through open-source projects have together driven further advances in Machine Learning. Examples of such learning capabilities include recent announcements of computers reaching human parity in image recognition[3] and speech recognition.[4]
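In contrast to the hand-written rules sketched earlier, a machine-learning model infers the mapping from inputs to outputs from labeled examples. The toy data below is invented purely for illustration, and the sketch assumes the scikit-learn library is available:

```python
# Learning a rule from examples instead of codifying it by hand.
from sklearn.tree import DecisionTreeClassifier

# Each example: [hours of daylight, temperature in °C]; label: 1 = "summer", 0 = "winter"
X = [[15, 28], [14, 25], [16, 30], [8, 2], [9, -1], [7, 5]]
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier()
model.fit(X, y)                    # the "learning" step

print(model.predict([[13, 22]]))   # e.g., [1] -> classified as "summer"
```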
Benefits and Pitfalls of AI
The above-mentioned developments show that it is now possible for computers to interact with humans, along some dimensions of cognition, at a level almost indistinguishable from other humans, which has created great interest in possible uses of AI. A few examples of positive use cases are given below:
- Image Processing, Detection, and Classification: AI can be used to process video streams and classify objects – for example, to determine whether an autonomous car is facing a traffic light, a person, or a cat, or to recognize situations that threaten employee safety, such as an employee not wearing a helmet in a factory. Similarly, AI-based systems can analyze the MRI scans of millions of cancer patients and, over time, detect patterns that support radiologists in catching cancer early.
- Natural Language Processing: AI can be used in call centers to understand customers’ complaints, or to swiftly browse through the huge volumes of text generated in judicial systems to gather information for lawyers.
- Forecasting and Optimization: AI can also be used by large retailers or financial institutions to forecast demand for a particular product and guide their customers to optimal products and services. It can be used to predict whether a particular bank customer is likely to default, guiding the bank and its customer to new products before such a default happens.
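As a minimal illustration of the forecasting use case above, the sketch below fits a simple linear model to past weekly demand and projects the next week; the numbers are entirely hypothetical, and scikit-learn is assumed to be available:

```python
# Toy demand forecast: fit a trend to past weeks and extrapolate one week ahead.
from sklearn.linear_model import LinearRegression

weeks = [[1], [2], [3], [4], [5], [6]]           # week index
units_sold = [120, 135, 150, 160, 172, 185]      # hypothetical past demand

model = LinearRegression().fit(weeks, units_sold)
next_week = model.predict([[7]])[0]
print(f"Forecast for week 7: {next_week:.0f} units")
```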
Almost all industries, from education to agriculture, pharmaceuticals, and financial services, are poised to use AI in various forms. It would be naïve, though, to assume that the road to using AI in everyday life is all smooth. What are the potential pitfalls of using AI effectively in all dimensions of life? A few points of caution are summarized below:
- Data Quality and Quantity: AI depends heavily on the quality and quantity of the data submitted to it, and, if left unattended, data in its natural form generally contains significant quality issues, including biases. To account for these issues, a successful AI implementation includes multiple steps of data cleansing (see the sketch after this list), which typically rely on data scientists who are also genuine subject-matter experts in the field they are analyzing.
- AI Interpretability: In contrast to a simple application that uses a set of explicit calculations, the image-processing or natural-language examples above generally rely on deep learning algorithms composed of many layers of interconnected parameters, which makes it difficult even for their developers to understand exactly how those algorithms arrive at a given result.
- Privacy: Because of the amount of data collected for effective AI implementations, it becomes easy for developers of AI to gain direct visibility into individuals’ behaviors or characteristics. A good example of privacy considerations arises when governments or corporations install street cameras to detect traffic patterns but do not take the relevant steps to mask or anonymize faces and number plates.
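As referenced in the first bullet above, a minimal sketch of a data-cleansing step might look like the following; the column names, values, and validity rules are hypothetical, and the pandas library is assumed to be available:

```python
# Toy cleansing pass: remove duplicates, missing values, and implausible entries
# before any model is trained.
import pandas as pd

raw = pd.DataFrame({
    "age":    [34, 34, 29, None, 230],            # a duplicate, a missing value, an impossible age
    "income": [52000, 52000, 48000, 61000, 75000],
})

cleaned = (
    raw.drop_duplicates()                  # remove exact duplicate rows
       .dropna(subset=["age"])             # drop rows with missing age
       .query("age > 0 and age < 120")     # discard implausible values
)
print(cleaned)
```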
These developments all raise very important questions: As AI-powered systems support (or potentially replace) humans in critical decision-making settings such as courts, security, or healthcare, how can we develop strong, ethical, and accountable mechanisms that protect the privacy of citizens? How can we secure international cooperation on the transfer of data and AI algorithms, and on their use in decision-making across borders? How would a society that upholds democracy use AI? What kinds of skills need to be developed for the effective use of AI within society?
Democratic Values and Responsible AI
Democracy, which can be defined as the people’s authority to decide on legislation directly or to elect representatives who decide on such legislation, depends on critical values such as liberty, equality, justice, freedom of speech, and voting rights. These values, upheld by individuals or groups of individuals and protected through explicit codes of laws and regulations, form the basis of decision making at both the micro and macro levels and facilitate inclusive decision making. If a certain judicial or economic decision, for instance, were to be made by systems empowered by AI, how could we assure individuals that their preferences are included in the decision while securing their rights to privacy and security?
The answer is likely to arise from a new Social Contract, to be devised among sovereign states, non-governmental organizations, corporations, academia, interest groups and individuals. This social contract would need to account for at least the following principles on the use of AI, which may be summarized as the “principles of Responsible AI”[5]:
Fairness and Inclusiveness
Fairness and inclusiveness are critical tenets of core democratic values such as equality, justice, and freedom of speech. We should expect AI systems to be based on the principle of treating all people fairly, ultimately improving the overall fairness of any judicial, political, or legal system today. AI systems can allocate or withhold opportunities or resources based on the input provided to them – presenting the possibility of over- or under-representing certain individuals or groups. An example of under-representation is where disadvantaged rural communities have fewer opportunities to access systems using AI, reducing their influence on the decisions driven by such systems. An AI-based decision-making cycle should ensure that everyone’s voice is heard and that their contributions are provided to the system as relevant inputs.
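One simple check for the over- or under-representation described above is to compare the rate at which a system grants an opportunity across groups; the decisions and group labels below are entirely hypothetical, and real fairness audits rely on richer metrics and carefully collected data:

```python
# Toy check of "selection rates" per group for an AI system's decisions.
from collections import defaultdict

# (group, decision) pairs; 1 = opportunity granted, 0 = withheld
decisions = [
    ("urban", 1), ("urban", 1), ("urban", 0), ("urban", 1),
    ("rural", 0), ("rural", 1), ("rural", 0), ("rural", 0),
]

granted = defaultdict(int)
total = defaultdict(int)
for group, decision in decisions:
    granted[group] += decision
    total[group] += 1

for group in total:
    print(f"{group}: opportunity granted to {granted[group] / total[group]:.0%} of applicants")
# A large gap between groups signals that the system, or its input data,
# may need closer inspection.
```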
Reliability and Safety
Democratic sovereign entities aim to provide physical, psychological, and mental safety for all citizens. With that goal in sight, AI systems should perform reliably and safely, and should be open to constant improvement based on human feedback. An AI project that provides a successful result in one city, country, or social group may not necessarily provide the same level of efficacy in another city, country, or social group. Developers of AI should incorporate a feedback mechanism for humans to constantly update the models used in AI. Similarly, AI applications should be developed with mandates to cease operating when faced with situations that may result in any harm to humans.
Transparency
Transparency in AI is critical because it enables users of AI to trust the system, and trust is at the core of any democratic, social construct. As noted among the pitfalls of AI above, though, AI interpretability may be hard to achieve due to the technical complexity of the algorithms used. Companies that develop AI applications should make concerted efforts to document and proactively share their methods, open their code to external communities, and be open about the limitations of their systems.
Accountability
However complex and hard to interpret AI applications may be, humans should be held accountable for the results of AI systems. Accountability also includes structures that are in place to ensure developers act in accordance with the core principles of ethics, the rule of law, and democracy. Developers of AI should have clear principles on how they develop, sell, and advocate for their applications; and sovereign entities should have clear guidelines for protecting their citizens against the pitfalls.
Privacy and Security
Privacy is a fundamental right that protects citizens against any power that may have oversight of their preferences, actions, or statements. Because AI and Machine Learning rely on data to model the world and learn, this increasing reliance on data adds a new level of complexity and new requirements for keeping it secure. Developers of AI should have clear boundaries on what they can access as they develop an application, train a model, or use it in real life. For example, an AI developer working on an application that detects credit card fraud should not have access to all of the user’s personally identifiable information, and should only have access to a “masked” set of data. Data origin and lineage, the use of data internally or externally, and data-corruption considerations should be constantly evaluated by developers and users of AI.
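A minimal sketch of the “masked” dataset idea is shown below; the field names and records are hypothetical, and a real system would use salted or keyed hashing, strict access controls, and a formal anonymization review rather than this simplified approach:

```python
# Pseudonymize direct identifiers and truncate card numbers before the data
# ever reaches a fraud-detection developer.
import hashlib

def mask_record(record: dict) -> dict:
    return {
        "customer_id": hashlib.sha256(record["name"].encode()).hexdigest()[:12],
        "card_number": "**** **** **** " + record["card_number"][-4:],
        "amount": record["amount"],                      # non-identifying fields pass through
        "merchant_category": record["merchant_category"],
    }

raw = {"name": "Jane Doe", "card_number": "4111111111111111",
       "amount": 42.50, "merchant_category": "groceries"}
print(mask_record(raw))
```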
A successful example of international efforts to secure privacy is the General Data Protection Regulation (EU) 2016/679 (GDPR), a regulation in the EU and the European Economic Area (EEA) on data protection and privacy. Adopted in 2016 and enforced since 2018, it is a directly binding and applicable regulation that protects the privacy of individuals in the EU and EEA and applies to enterprises regardless of where they are based. The GDPR has served as a model for many other privacy-protection regulations around the world, including in Brazil, Chile, Japan, South Korea, South Africa, Malta, and Turkey. It is based on the idea that personal data may not be processed unless there is at least one legal basis to do so, such as the person’s consent to such processing or a data controller’s legal obligations.[6]
More recently, the European Commission released a proposal for the regulation not just of data but also of AI,[7] which aims to “build an ecosystem of excellence in AI” and strengthen the EU’s ability to compete globally, while ensuring that “Europeans can trust the AI they are using.” This effort shows that a single nation state or a single governing body is not sufficient to build an environment of trust, and that international cooperation is needed to secure privacy in particular, in the age of social media and ubiquitous computing (smart phones, smart watches, smart buildings, smart cars, etc.).
As listed above, the main pillars of Responsible AI include at least fairness, inclusiveness, reliability, safety, transparency, privacy, security, and accountability. For a society to stand effectively on these pillars, all the actors that develop and govern AI applications should jointly agree on a contract that endorses them, which, as stated above, can be constructed as a social contract. The government’s role here would be to protect its citizens while maintaining an environment where innovation is possible. The private sector’s role would be to develop applications with clear governance mechanisms to oversee how AI applications adhere to the principles mentioned here. As potential users or developers of AI, government entities should also apply at least the same principles as the private sector, and with at least the same level of openness and trust.
Building an International Environment of Trust in AI
Building an environment of trust in AI requires a social contract, as stated above. However, compared with previous technologies that could be monitored at a national level (such as the TV or telecommunications industries), this new social contract needs to extend beyond national boundaries, since no single state, country, or company controls the AI ecosystem – a convoluted web of hyperscale cloud operators, large developer companies, multinational service providers, regional and local telecommunications providers, local and state governments, and supranational entities such as the EU or NATO.
A widely discussed question of legal responsibility in AI and robotics lays out the problem of international collaboration clearly: if an autonomous car has an accident while in self-driving mode, who is accountable for the penalties incurred (an autonomous car is, in essence, a robot with advanced AI capabilities)? Part of the answer lies in the Vienna Convention on Road Traffic, an international treaty designed to facilitate international road traffic, signed in 1968 and in force since 1977.[8] The Vienna Convention acts as the foundation for standard traffic rules among contracting parties, including the fundamental principle that a driver is always fully responsible for the behavior of a vehicle in traffic. If a car is in self-driving mode, can the driver in one country be held accountable for the decisions of an AI application that may have been developed in another country and that runs on a chip produced in yet another? In the spirit of the original treaty, nation states have recently come together to discuss potential amendments to the Vienna Convention’s text.
Similar international conventions and agreements are needed to control how AI systems are developed, deployed, and updated across different industries and domains, such as medical sciences and public health, law and judicial bodies, and mainstream and social media. Such international agreements need to recognize that the corporations that now build AI applications seldom work in a single-country environment, and should therefore protect the rights of citizens of each country within a global context. At the same time, these international agreements should also focus on fostering innovation, rather than stifling all AI applications.
Educating a Society for AI
On one hand, AI poses a threat to uneducated or unqualified masses because of the power of automation it brings to many industries. On the other hand, it also provides immense new opportunities for maximizing economic output and improving everyone’s lives. To effectively capture the benefits of AI while minimizing its potential negative impacts, societies need to take deliberate steps to educate their citizens so that they can coexist with AI-based services.
The first line of thinking about such education would be to upskill citizens in areas such as computer science or data science, as potential developers of AI. However, the cost of such education and its prerequisites (such as a strong foundation in mathematics and statistics, or the availability of computing for everyone) make it impractical for most nation states to deploy nationwide for all students and adults.
Therefore, a good second choice would be to educate all citizens in various fundamental skills (mostly on the social side) that cannot easily be taken over by computers and AI systems. AI can provide individuals and organizations with huge new cognitive capabilities, but humans (at least for the foreseeable future) have evolved over hundreds of thousands of years to become very effective in social skills and retain the upper hand in intuition, abstract and conceptual thinking, and emotional control. Twenty-first-century skills development, therefore, should not concentrate only on the currently popular areas of STEM (Science, Technology, Engineering, Mathematics), but should also emphasize the development of communication, collaboration, creativity, and critical thinking.
Concluding Remarks: AI as a Balancing Act
Popular media coverage of AI focuses heavily either on its benefits or on its threats to humans, and seldom takes a balanced view. Benefits include increasing the effectiveness of diagnosis in healthcare, decreasing the cost of providing financial services to the masses, or reducing the impact of accidents through the better flow of traffic that autonomous driving enables. Risks include unjust loan decisions by banks due to biases prevalent in the data, potential threats to humans from autonomous weapons, or governments’ use of the immense data they collect on their citizens. Taking a one-sided view of AI, focusing only on its benefits or only on its threats, limits policy makers’ ability to shape the right governance mechanisms. Policy makers, therefore, need to take into consideration the interests of all parties involved, including citizens, corporations, international bodies, academia, and communities.
Such a balanced view of AI would not be possible without multiple parties working together on a common goal. A corporation taking that balanced view, for instance, would convene “Ethics Committees” before implementing AI projects, involving key business departments, legal teams, ethics experts, and technical teams. Similarly, a government acting with that balanced view would enable startups and corporations to work on AI projects, including projects involving its citizens, but would establish an environment of transparency in data collection, processing, and deletion (as the GDPR requires). An international organization (such as the UN, the WHO, or a similar body) taking that balanced view would enable data to flow securely across boundaries, much as goods flow among countries. With such a balanced view, grounded in the pillars of Responsible AI, AI would benefit our societies immensely and help foster democracy across the world.
[1] Minsoo Kang, “The Mechanical Daughter of René Descartes: The Origin and History of an Intellectual Fable,” Modern Intellectual History (2017).
[2] Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (Knopf, 2017).
[3] Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun, “Deep Residual Learning for Image Recognition,” (2015), https://arxiv.org/abs/1512.03385
[4] W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu and G. Zweig, “Achieving Human Parity in Conversational Speech Recognition”, (2016), https://arxiv.org/abs/1610.05256
[5] For further reading on Responsible AI, see “Responsible Use of Technology: The Microsoft Case Study”, (2021).
[6] For further reading on “General Data Protection Regulation”, see https://eur-lex.europa.eu/eli/reg/2016/679/oj
[7] For the European Commission’s Proposal Regulation on AI, see https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence
[8] For Vienna Convention and the standards on international traffic rules, see https://treaties.un.org/doc/Treaties/1977/05/19770524%2000-13%20AM/Ch_XI_B_19.pdf