Preventing AI Nuclear Armageddon
By Melissa Parke*
It is no longer science fiction: the race to apply artificial intelligence to nuclear-weapons systems is underway – a development that could make nuclear war more likely. With governments worldwide acting to ensure the safe development and application of AI, there is an opportunity to mitigate this danger. But if world leaders are to seize it, they must first recognize just how serious the threat is.
In recent weeks, the G7 agreed on the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, in order “to promote safe, secure, and trustworthy AI worldwide,” and US President Joe Biden issued an executive order establishing new standards for AI “safety and security.” The United Kingdom also hosted the first global AI Safety Summit, with the goal of ensuring that the technology is developed in a “safe and responsible” manner.
But none of these initiatives adequately addresses the risks posed by the application of AI to nuclear weapons. Both the G7 code of conduct and Biden’s executive order refer only in passing to the need to protect populations from AI-generated chemical, biological, and nuclear threats. And UK Prime Minister Rishi Sunak did not mention the acute threat posed by nuclear-weapons-related AI applications at all, even as he declared that a shared understanding of the risks posed by AI had been reached at the AI Safety Summit.
No one doubts the existential risks posed by the use of nuclear weapons, which would wreak untold devastation on humanity and the planet. Even a regional nuclear war would kill hundreds of thousands of people directly, while leading to significant indirect suffering and death. The resulting climatic changes alone would threaten billions with starvation.
Nuclear history is rife with near-misses. All too often, Armageddon was averted by a human who chose to trust their own judgment, rather than blindly follow the information provided by machines. In 1983, the Soviet officer Stanislav Petrov received an alarm from the early-warning satellite system he was monitoring: American nuclear missiles had been detected heading toward the Soviet Union. But rather than immediately alert his superiors, surely triggering nuclear “retaliation,” Petrov rightly determined that it was a false alarm.
Would Petrov have made the same call – or even had the opportunity to do so – if AI had been involved? In fact, applying machine learning to nuclear weapons will reduce human control over decisions to deploy them.
Of course, a growing number of command, control, and communications tasks have been automated since nuclear weapons were invented. But, as machine learning advances, the process whereby advanced machines make decisions is becoming increasingly opaque – what is known as AI’s “black box problem.” This makes it difficult for humans to monitor a machine’s functioning, let alone determine whether it has been compromised, is malfunctioning, or has been programmed in such a way that could lead to illegal or unintentional outcomes.
Simply ensuring that a human makes the final launch decision would not be enough to mitigate these risks. As psychologist John Hawley concluded in a 2017 study, “Humans are very poor at meeting the monitoring and intervention demands imposed by supervisory control.”
Moreover, as Princeton University’s Program on Science and Global Security showed in 2020, leaders’ decision-making processes in a nuclear crisis are already very rushed. Even if AI is used merely in sensors and targeting, rather than to make launch decisions, it will shorten the already tight timescale for deciding whether to strike. The added pressure on leaders will increase the risk of miscalculation or irrational choices.
Yet another risk arises from the use of AI in satellites and other intelligence-detection systems: it will become more difficult to hide nuclear weapons, such as those carried on ballistic-missile submarines, which have historically been concealed. This could spur nuclear-armed countries to deploy all their nuclear weapons earlier in a conflict, before their adversaries get a chance to immobilize known nuclear systems.
So far, no initiative – from Biden’s executive order to the G7’s code of conduct – has gone beyond a voluntary commitment to ensure that humans retain control of nuclear-weapons decision-making. But, as United Nations Secretary-General António Guterres has noted, a legally binding treaty banning “lethal autonomous weapons systems” is crucial.
Such a treaty would be a necessary first step, but much more needs to be done. When it comes to nuclear weapons, trying to anticipate, mitigate, or regulate the new risks created by emerging technologies will never be enough. We must remove these weapons from the equation entirely.
This means that all governments must commit to stigmatize, prohibit, and eliminate nuclear weapons by joining the Treaty on the Prohibition of Nuclear Weapons, which offers a clear path toward a world without such arms. It also means that nuclear-armed states must immediately stop investing in modernizing and expanding their nuclear arsenals, including in the name of making them “safe” or “secure” from cyberattacks. Given the insurmountable risks posed by the mere existence of nuclear weapons, such efforts are fundamentally futile.
We know that autonomous systems may lower the threshold for engaging in armed conflict. When applied to nuclear weapons, AI adds another layer of risk to an already unacceptable level of danger. It is critical that policymakers and the public recognize this, and fight not only to avoid applying AI to nuclear weapons, but to eliminate such weapons entirely.
*Melissa Parke, a former Australian minister for international development, is Executive Director of the International Campaign to Abolish Nuclear Weapons, which received the Nobel Peace Prize in 2017.
© The Geneva Observer - Project Syndicate