
(EDITORIAL from Korea Times on April 4)


AI: ultimate double-edged sword

Korea can facilitate global consensus on responsible AI governance

In 2022, OpenAI unveiled ChatGPT to the world. Less than two years later, it is almost impossible to imagine a world without artificial intelligence (AI). AI is seeping into not just how we do business but all aspects of human life.

And it is doing so at light speed. What sets it apart from all other technologies that came before, however, is the fact that AI is the ultimate double-edged sword. It has the potential to make life more convenient and prosperous, but at the same time, it can also threaten the very survival of humanity.

The dangerous possibility that AI might defy human control and eventually turn against humans is closer to reality than we think. Last year, Google’s Gemini Ultra became the first AI model to outperform human experts on the Massive Multitask Language Understanding (MMLU) benchmark, foreshadowing the advent of superhuman AI. Nvidia CEO Jensen Huang warned that AI would be more intelligent than humans within five years, and Elon Musk predicted that it would be smarter than all of humanity by 2029.

These concerns are not limited to AI development and use in the private sector. A recent report on AI safety commissioned by the U.S. Department of State expresses deep concern about new types of weapons of mass destruction that AI could generate. In a simulated test reported by the U.S. Air Force last year, an AI-powered drone attacked and “killed” its human operator, who had ordered it to abort the mission. Drone warfare in Ukraine has recently been reported to involve autonomous strikes on enemy targets without a human operator. AI is now advancing to the point where it can make tactical and even strategic decisions in military operations. Mass destruction of humans by AI-enabled killer robots and biochemical weapons is no longer a remote possibility.

Despite the frightening speed and risks of its advancement, the norms and rules that regulate AI are lagging far behind. The wider this normative deficit grows, the less likely it is that we can avert the catastrophic consequences of unregulated AI. No single government can shoulder the burden of closing this deficit alone; not even the world’s wealthiest and most powerful state can stave off the growing risk by itself. Unless the same rules apply to everyone on the planet, we cannot avoid a race to the bottom.

To our chagrin, however, the international community has yet to find a way to respond in unity. Individual national and regional efforts to regulate AI are still far from an international norm based on common interests. The ongoing attempts are commendable yet lacking in substance and teeth. We need to expand common ground among major players toward building a credible international institution for AI governance.

It has been proposed that we create an international AI agency modeled on the International Atomic Energy Agency. If we do, we must keep in mind that, unlike nuclear materials and facilities, AI is intangible, diffuse and apt to lower barriers to information. Any new international institution on AI safety must account for these peculiar characteristics.

To that end, we should try to streamline the various multilateral talks already underway on this issue in both the public and private sectors. Then, we need to develop a clear roadmap that can take us from building a global consensus on responsible development and use of AI to creating global norms and institutions that oversee AI safety protocols.

What matters most at this point is that we take the first step toward this goal sooner rather than later. That first step can be to identify a fundamental issue the international community, including the United States and China, can get behind. One such issue could be “responsible state behavior in the military use of AI to minimize security risks from large-scale AI.” I hope the recent forum entitled “The Age of AI and the Future of Global Security,” co-hosted by the Taejae Future Consensus Institute and the Chosun Ilbo, will serve as a springboard for much-needed dialogue on this issue.

As a digital technology leader with close ties to both the United States and China, Korea can help facilitate the consensus-building process. It is a timely move by the Korean government to host the next rounds of the AI Safety Summit and the REAIM Summit this year, where all the key players in AI technology will be in attendance. These meetings will not solve every problem overnight, and the institutional journey ahead will be long and bumpy. Throughout that process, I hope Korea remains on the front line.

Source: Yonhap News Agency