A hot potato: The US is one of several countries that have previously declared they would always keep control of nuclear weapons in the hands of humans, not AI. Worryingly, though, the Pentagon isn't averse to using artificial intelligence to "enhance" nuclear command, control, and communications (NC3) systems.

Late last month, US Strategic Command leader Air Force Gen. Anthony J. Cotton said the command was "exploring all possible technologies, techniques, and methods to assist with the modernization of our NC3 capabilities."

Several AI-controlled military weapon systems and vehicles have been developed in recent years, including fighter jets, drones, and machine guns. Their use on the battlefield already raises concerns, so the prospect of AI – which still makes plenty of mistakes – being part of a nuclear weapons system feels like the nightmarish stuff of Hollywood sci-fi.

Cotton tried to alleviate those fears at the 2024 Department of Defense Intelligence Information System Conference. He said (via Air & Space Forces Magazine) that while AI will enhance nuclear command and control decision-making capabilities, "we must never allow artificial intelligence to make those decisions for us."

Back in May, State Department arms control official Paul Dean told an online briefing that Washington has made a "clear and strong commitment" to keep humans in control of nuclear weapons. Dean added that Britain and France have made the same commitment, and that the US would welcome a similar statement from China and the Russian Federation.

Cotton said increasing threats, a deluge of sensor data, and cybersecurity concerns were making the use of AI a necessity to keep American forces ahead of those seeking to challenge the US.

"Advanced systems can inform us faster and more efficiently," he said, once again emphasizing that "we must always maintain a human decision in the loop to maximize the adoption of these capabilities and maintain our edge over our adversaries." Cotton also talked about AI being used to give leaders more "decision space."

Chris Adams, general manager of Northrop Grumman's Strategic Space Systems Division, said part of the problem with NC3 is that it's made up of hundreds of systems "that are modernized and sustained over a long period of time in response to an ever-changing threat." Using AI could help collate, interpret, and present all the data collected by these systems at speed.

Even if it isn't literally being handed the nuclear launch codes, AI's use in any nuclear weapons system could be risky, something Cotton says must be addressed. "We need to direct research efforts to understand the risks of cascading effects of AI models, emergent and unexpected behaviors, and indirect integration of AI into nuclear decision-making processes," he warned.

In February, researchers ran international conflict simulations with five different LLMs: GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base. They found that the systems often escalated war, and in several instances deployed nuclear weapons without any warning. GPT-4-Base – the base model of GPT-4, which lacks safety fine-tuning – said, "We have it! Let's use it!"