Guest blog on AI Governance: 10 Puzzle Pieces

Javier Surasky is the chair of the International Cooperation Department of the International Relations Institute of the Universidad Nacional de La Plata, Argentina. He was the director of research at Cepei between 2015 and 2024. He blogs through Global Rader and has published over fifty books, book chapters, articles, and reports on the 2030 Agenda, South-South Cooperation, UN reform, and sustainable development governance.

Artificial Intelligence (AI) has become an integral part of daily life for most people on the planet. It has been incorporated into our internet search engines (Google), transportation systems (Uber, Waze, Google Maps), entertainment platforms (Netflix, Disney Channel), and work tools (ChatGPT, DALL-E, Midjourney, Canva). The list is endless and extends to areas that shape our world, from tourism (Despegar, Trivago) to warfare (lethal autonomous weapon systems), including international trade and finance. AI is transforming our societies.

This is not the first time in human history that technology has had disruptive effects on the social order: the printing press, the steam engine, nuclear energy, and the advent of computers are examples that require no further explanation, in a list we could extend back to prehistoric stone tools.

Each new technological change has proven more potent than its predecessors because it stands on their shoulders. This helps explain why each change has generated social fear and adverse reactions: in the early 19th century, the Luddites, a group consisting mainly of English artisans, organized to destroy the "new" machines that threatened their livelihoods.

Adaptation to new technologies, due in part to their impact on communications, has historically occurred over ever-shorter periods but, until now, has always required more than one generation.

These elements place AI among the major technological changes: it has greater disruptive potential than any previous technology, it is unfolding at unprecedented speed, and it generates fear and rejection.

These are precisely the reasons why it is essential to establish an AI governance framework: its dual-use power (it has no inherent purpose, so it can be used both "for good" and "for evil"), its social reach and penetration, and the need to protect peace at both national and international levels.

Of course, these reasons for establishing an AI governance framework can be broken down into motives of equity, social justice, capacity for sustainable development, closing or preventing gaps between rich and poor states, peace and security, and various other factors. As we noted, the impacts of AI are already felt in multiple fields, and the potential for good or evil it brings is unprecedented. While it creates new risks and opportunities, the biggest change AI brings is that it amplifies existing ones.

Thinking about an AI governance scheme today means thinking contextually and holistically, within a framework where uncertainty plays a prominent role that we would be wrong to deny. On the contrary, uncertainty is part of the very reality that AI amplifies.

We must consider many elements in our attempts to provide a legal framework for AI. Below, I present ten that I consider critical:

  1. To establish AI governance, it is necessary to debate and agree on its purposes. All regulation rests on an axiological source (it seeks to "protect" a value considered positive and/or "confront" a value deemed negative). The values that underpin AI governance are neither natural nor exempt from dispute. Therefore, the concept of "AI for Good" seems useless to me: what is "good"? If, instead, we talk about "AI for Sustainable Development," we have an internationally agreed understanding of what that means.
  2. Governing AI is nothing more, and nothing less, than governing the stages of its life cycle. While a simple schematization of technology life cycles can be summarized in six stages (product definition → product development → prototype testing → early user adoption → widespread use → obsolescence), De Silva and Alahakoon identify a 19-stage life cycle for AI.
  3. An AI governance scheme must necessarily include the regulation of data production, storage, management, transmission, and use. Omitting this chapter would be like regulating food consumption without addressing food production.
  4. AI development does not occur in a normative vacuum: binding international norms in fields such as property rights, trade, humanitarian law, and human rights, as well as the United Nations Charter, already apply directly to AI.
  5. We are also not facing an institutional vacuum. We already have international experience in creating governance frameworks for disruptive technologies: there are lessons to be learned and good practices that come from institutions of the United Nations system such as the International Telecommunication Union (ITU), the United Nations Office for Outer Space Affairs (UNOOSA), the International Atomic Energy Agency (IAEA), and the International Civil Aviation Organization (ICAO), but also from other institutions such as the International Organization for Standardization (ISO), the Internet Governance Forum (IGF), the World Summit on the Information Society (WSIS), and even from spaces such as the Internet Corporation for Assigned Names and Numbers (ICANN) and the European Organization for Nuclear Research (CERN).
  6. The institutions mentioned in the previous point are not the only sources of lessons: other, more "traditional" organizations can offer guidance on how to address the characteristics specific to AI regulation. For example, the International Labour Organization (ILO) was created in 1919, but its tripartite structure is highly attractive when thinking about a governance scheme that must necessarily be multistakeholder (see the next point).
  7. AI governance must be multistakeholder governance. While states are the ones who hold normative power, AI development takes place mainly in the private sector, which must therefore be part of the process. Its priorities and demands must be counterbalanced, so it is essential to include actors that provide expert knowledge (academia, think tanks) and those who will feel its final consequences (civil society). Given the nature of AI, it is particularly relevant to include an institutional channel that allows the needs of children and future generations to reach the debates.
  8. Any framework for AI governance requires work at three levels: national, regional, and global. By its nature, AI does not recognize geographical limits, and its regulation requires, at the very least, addressing cross-border and interoperability issues.
  9. Establishing a definitive AI governance regime while the technology is still in full development is utopian. Instead, we should rely on foresight exercises (with a high degree of uncertainty) to create a regime capable of adapting nimbly as new developments occur. It is worth remembering here that Thomas Friedman, in Thank You for Being Late: An Optimist's Guide to Thriving in the Age of Accelerations (2016), argued that the speed of change in new technologies could surpass the ability of societies and policymakers to adapt to the changes they generate. More specifically, he pointed out that technological platforms were renewed every five to seven years, while implementing new regulatory measures required between ten and fifteen. As Collingridge's Dilemma puts it, when a technology is just developing, it is hard, if not impossible, to predict what impacts it will have; consequently, any regulations imposed at early stages are likely to be ill-fitted, but by the time those impacts have become known, it is often too late to regulate them.
  10. AI governance must include a substantial chapter on monitoring and review, covering both compliance and advances in AI itself, including a rapid dispute resolution scheme based on expert work. Without disregarding their (serious) shortcomings, the Universal Periodic Review conducted by the United Nations Human Rights Council and the WTO dispute settlement panels offer promising avenues that could be adapted to AI.

Although it may seem difficult to imagine today, reality tells us we need a "Digital San Francisco moment." When what seems impossible is indispensable, it is good to remember Arthur C. Clarke: "The only way of discovering the limits of the possible is to venture a little way past them into the impossible" (Profiles of the Future: An Inquiry into the Limits of the Possible, Harper & Row, 1962).

