A US-Led AI Treaty: The Winning Move
The race to superintelligence is a race off a cliff. The only winning move is to pause.
The United States and China are racing off a cliff. Each superpower is hellbent on building ever-more-powerful AI systems to gain an advantage over the other, culminating in the creation of superintelligence – AI more intelligent than all of humanity combined. In fact, building superintelligence is the stated goal of several AI companies.
The problem is that we don’t know how to control superintelligent AI. Hundreds of the world’s leading experts agree that it could escape human control and kill us all. Surveys of several thousand AI researchers have found average estimates of roughly 1 in 6 that superhuman AI wipes out humanity – literal Russian-roulette odds – and some researchers place their odds far higher.
The US must immediately negotiate an AI Treaty with China banning superintelligence before our fates are sealed.
Provisions of an AI Treaty
An AI Treaty must include a few basic, common-sense provisions:
Pause Superintelligent AI Development
Immediately pause all development of superintelligent AI. This can be done in several ways, such as capping the amount of compute used in the largest AI training runs and forbidding AI model development beyond this “danger threshold.”
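To make the compute-cap idea concrete, here is a minimal sketch of the arithmetic a regulator might apply. The cap value, GPU specs, and cluster sizes are illustrative assumptions, not figures from any actual proposal:

```python
# Illustrative sketch: checking a training run against a compute cap.
# All numbers below are assumptions for illustration only.

COMPUTE_CAP_FLOPS = 1e25  # hypothetical "danger threshold" in total training FLOPs

def training_flops(num_gpus: int, peak_flops_per_gpu: float,
                   utilization: float, seconds: float) -> float:
    """Estimate total compute used by a training run."""
    return num_gpus * peak_flops_per_gpu * utilization * seconds

# A frontier-scale run: 10,000 accelerators at ~1e15 FLOP/s peak,
# 40% utilization, running for 90 days.
frontier_run = training_flops(10_000, 1e15, 0.4, 90 * 86_400)

# A small academic run: 8 accelerators for one week.
small_run = training_flops(8, 1e15, 0.4, 7 * 86_400)

print(frontier_run > COMPUTE_CAP_FLOPS)  # the frontier run exceeds the cap
print(small_run > COMPUTE_CAP_FLOPS)     # the small run does not
```

Under these assumptions, the frontier-scale run uses roughly 3×10²⁵ FLOPs and trips the cap, while the small run sits orders of magnitude below it – which is the point of a compute threshold: it targets only the largest runs.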
International AI Safety Agency
Create an international AI safety agency, staffed by AI experts. This agency will be responsible for establishing standards on AI training and deployment, determining and regularly updating “red lines” around AI progress, and supporting research into safe AI development.
Verification & Enforcement
Establish strong verification and enforcement mechanisms to prevent rogue actors from building superintelligence. A wide range of candidate mechanisms exist, including tracking the sales of hyper-specialized AI chips, detecting unauthorized large-scale AI training runs via energy monitoring, and establishing on-chip reporting.
Other Treaty Proposals
Several other treaty proposals have been put forward. They differ in their details, but any one of them would be incalculably better than our current path to ruin. The biggest challenge at this point is not drafting new treaty proposals, but building the political will to adopt one.
Enforcing an AI Treaty
A treaty banning superintelligence is only as strong as our ability to enforce it. Luckily, several candidate mechanisms exist to prevent superhuman AI development:
Track Hardware Sales
Track the sales of GPUs and other hardware that can be used to train human-level AI. This is feasible because frontier AI models cost hundreds of millions of dollars to train and rely on hyper-specialized hardware that passes through a handful of supply-chain “choke points.” An analogy: AI chips are to frontier models what enriched uranium is to nuclear weapons – difficult to acquire and therefore feasible to regulate.
Detect Unauthorized Training Runs
Detect unauthorized large-scale AI training runs via remote sensing of thermal and visual signatures, and via energy monitoring that flags massive power consumption. The highest-risk training runs are highly visible: they generate heat and draw enough power to produce large-scale perturbations in the electrical grid.
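As a toy illustration of the energy-monitoring idea (the detection rule, the 50 MW step, and the 24-hour window are all illustrative assumptions): a large training cluster shows up as a sustained step increase in grid load, unlike ordinary demand fluctuations:

```python
# Illustrative sketch: flagging a sustained step increase in grid load.
# Threshold and window values are assumptions for illustration only.

def flags_large_run(hourly_mw, baseline_mw, step_mw=50.0, min_hours=24):
    """Return True if load exceeds baseline by step_mw for min_hours straight."""
    streak = 0
    for reading in hourly_mw:
        if reading - baseline_mw >= step_mw:
            streak += 1
            if streak >= min_hours:
                return True
        else:
            streak = 0
    return False

baseline = 200.0  # MW, hypothetical normal regional load

# 48 hours of load after a hypothetical cluster comes online (+60 MW, sustained).
cluster_online = [260.0] * 48
# Ordinary fluctuation: brief spikes that never persist.
normal_noise = [200.0, 255.0, 210.0, 260.0, 205.0] * 10

print(flags_large_run(cluster_online, baseline))  # sustained step: flagged
print(flags_large_run(normal_noise, baseline))    # transient spikes: not flagged
```

The design choice here is persistence, not magnitude alone: a training cluster runs continuously for weeks, so requiring the elevated load to last a full day filters out the short spikes normal grids produce all the time.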
On-Chip Reporting
Establish on-chip reporting mechanisms within specialized AI chips to detect when the hardware is being used in an unauthorized training run. NVIDIA has already shipped firmware-level mechanisms that detect cryptocurrency-mining workloads on its GPUs and throttle them. Similar protocols could be applied to AI training.
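What might on-chip reporting look like in software terms? The sketch below is purely hypothetical – real proposals involve hardware roots of trust and asymmetric cryptography, not a shared secret – but it shows the basic shape: the chip emits signed usage reports, and a verifier can detect tampering:

```python
# Hypothetical sketch of signed on-chip usage reports.
# A real scheme would use a hardware root of trust and public-key signatures;
# HMAC with a shared secret is used here only to keep the example short.
import hashlib
import hmac
import json

def sign_report(secret: bytes, report: dict) -> str:
    """Chip side: produce an authentication tag over a usage report."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_report(secret: bytes, report: dict, tag: str) -> bool:
    """Verifier side: check that the report was not altered in transit."""
    return hmac.compare_digest(sign_report(secret, report), tag)

secret = b"chip-unique-key"  # hypothetical per-chip provisioned key
report = {"chip_id": "A1", "hours_active": 720, "workload": "training"}

tag = sign_report(secret, report)
print(verify_report(secret, report, tag))         # genuine report verifies

tampered = dict(report, workload="idle")          # operator tries to hide usage
print(verify_report(secret, tampered, tag))       # tampering is detected
```

The point of the sketch is that an operator who wants to misreport usage must either forge the tag (infeasible without the chip's key) or suppress reports entirely – and missing heartbeats are themselves a detectable signal.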
Getting China to Cooperate
The United States and China don’t agree on much. But if there’s one thing we should agree on, it’s that we don’t want to die.
Superintelligent AI is not a strategic technology to advance the interests of any nation. It is a doomsday device that will destroy us all, regardless of who develops it first. There is no reward to being the first country to create it.
Every government – whether democratic or dictatorial – cares about maintaining stability and preventing existential threats to its power. The last thing any government wants is a rogue AI, smarter than all of its officials, that escapes its control and launches a coup. And yet this is exactly what could happen if superintelligence were built.
Many Chinese insiders are aware of the extreme risks of superintelligence. China’s National Technical Committee released its AI Safety Governance Framework, identifying the risk of “AI becoming uncontrollable in the future.” Several Chinese AI researchers – including Yi Zeng, member of the UN High-level Advisory Body on AI – have called for a global ban on superintelligence.
Moreover, China has signaled interest in working with the US to avert AI catastrophe. China’s ambassador to the US warned that building AI could “open Pandora’s Box” and has called for closer collaboration. In 2024, the US and China agreed that humans, not AI, should control decisions over the use of nuclear weapons.
We need not assume that the Chinese government is well-intentioned – just that it is not suicidal. Any regime acting out of self-interest should want to prevent a loss of control to superintelligence.
Historical Precedent for an AI Treaty
We are alive today because rival superpowers in the past have recognized that it’s a bad idea to destroy the world.
The US and the Soviet Union were staunch adversaries. But after scientists including Carl Sagan warned of nuclear winter, President Reagan wrote dozens of letters to Soviet leader Mikhail Gorbachev on the subject. The two met at the 1985 Geneva Summit and went on to negotiate the 1987 Intermediate-Range Nuclear Forces (INF) Treaty, eliminating an entire class of nuclear missiles.
— Reagan and Gorbachev, 1985
The same recognition – that some technologies threaten everyone, regardless of who builds them – applies to superhuman AI.
Our future is not written in stone. We’ve stopped the world from ending before – let’s do it again.
An AI Treaty Enjoys Broad Support
Experts, public figures, and everyday citizens all support an international ban on superintelligence.
In 2025, a global statement was published calling for a prohibition on the development of superintelligence. This statement was signed by (among 130,000+ others):
- Geoffrey Hinton and Yoshua Bengio, the “godfathers of AI”
- Several members of the US national security community, including a former National Security Advisor and a former Chairman of the Joint Chiefs of Staff
- Several former US congressmembers of both parties
- Former European heads of state
- Members of the UK Parliament and the House of Lords
- Leading AI researchers in China
Despite the vast differences between many signatories, all agree on the basics: we don’t want to be killed, and we want to keep the future human.
The American public is behind similar efforts. 63% of Americans support a ban on smarter-than-human AI, and a plurality of both Democrats and Republicans support an international treaty to ban any smarter-than-human AI.
Demand a Global AI Treaty
The US should lead negotiations on a global AI treaty to ban superintelligent AI systems until they can be made safe.
Contact Your Representative