
Frequently Asked Questions

Common questions about AI risk, our proposal, and why action is urgent.

If We Pause AI, What About China?

PauseAI US’ proposal is for an international treaty banning superintelligence, signed by both the US and China. Unilateral action from any one country is unlikely to work on its own, since other countries would be able to continue building superintelligent AI. That’s why the only solution is a global pause.

See more on how to get China to cooperate.

Are You Against All AI?

AI is a very broad term – depending on the definition, it encompasses countless applications in medicine, science, and much else. We’re not against useful, safe, and controllable AI systems. We’re against efforts to build superhuman systems which could endanger us all, and we’re against the current reckless AI arms race hurtling us toward the brink of disaster.

As far as narrow, controllable AI systems are concerned, our belief is that these could be used for both good and evil. Rather than being categorically for or against narrow AI, our approach is more nuanced and context-dependent – regulations are needed to steer narrow AI in the right direction – whereas the race to superintelligence is a categorical evil which must be stopped.

Aren’t You Buying into the “Hype” from AI Companies?

There’s a common concern that talking about smarter-than-human AI systems is feeding into “hype” from AI companies looking to exaggerate or promote their products. Sometimes, this even includes the argument that talking at all about extinction risk from AI is buying into the narrative that these companies want to push (why an AI company would want to advertise that its product could kill everyone is less than clear).

There is every reason to distrust AI company narratives. That’s why we look to whistleblowers, academics, and expert researchers outside the industry. We find that AI company CEOs consistently downplay the risk of human extinction – for example, Elon Musk has made incoherent arguments for why things should work out okay – while knowledgeable third parties are much more pessimistic.

Some of the world’s best AI researchers, including Geoffrey Hinton, Daniel Kokotajlo, and many others, have resigned from AI companies in protest and warned the world that we aren’t on track to build safe AI. We need to listen to them, not the handwaving from CEOs.

How Would Superintelligent AI Actually Harm Us? Couldn’t We Just Unplug It?

There are countless ways that superintelligent AI could harm us in the real world. To list just a few examples, superintelligent AI could:

  • Create a new pathogen and have it printed in a lab. AI models are already rivaling experts at virology research, and even current models could be used to help create bioweapons.
  • Control autonomous weapons. Governments are already working with AI companies to integrate their technology into autonomous weapons systems. Superhuman AI could command swarms of killer drones and remove humans from the loop.
  • Pay human beings to do its work. Whatever a superintelligent AI couldn’t do itself, it could pay humans to do for it. There is already a service that lets AI hire humans to complete tasks in the real world. And if AI-induced unemployment spikes, superintelligent AI would have plenty of potential hires looking for work.

We couldn’t just “unplug” superintelligent AI. It would recognize the risk of being shut down and copy itself onto the internet right away (in fact, it might not even need to – humans might connect it to the internet by default, as we have with current AI models). Once a superintelligence has copied itself onto the internet, it could run millions of copies of itself in parallel and would be nearly impossible to stop.

That’s why we need to stop superintelligence from being built, before the genie escapes the bottle.

If Superintelligence Is So Dangerous, Then Why Are AI Companies Building It?

It’s reasonable to think that AI companies would know better than to unleash an omnicidal machine on the world. So why haven’t they packed their things and called it quits? There are a few plausible explanations:

Arms-Race Dynamics & Market Incentives

The CEOs of different AI companies might want to slow down, but feel that they can’t, because if they did, their competitors would simply race ahead. DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei have both made statements to this effect. These racing dynamics are also why voluntary self-regulation is insufficient – AI companies are likely to roll back self-imposed restraints if it costs them their lead in the AI race (Anthropic has recently done just this).

These dynamics also apply to the geopolitics between the US and China. The only way to stop the arms race between companies and countries is through binding regulation and a global treaty.

Gambling with the World for Personal Gain

Silicon Valley has a high-risk, high-reward, “move fast and break things” culture. That may be fine for a startup, but not when the lives of everyone on Earth are at stake.

AI company CEOs are risk-takers. The technology they build could destroy the world, but it could also make them trillionaires and stop them from aging. They seem willing to roll the dice. They do not have the rest of the world’s consent to run this potentially fatal experiment.

Selection Effect for Optimism

If you think that AI will lead to the end of the world, then you are less likely to work at an AI company to begin with, and more likely to quit when it becomes apparent that your company’s product could kill you.

Waves of experts have resigned from AI companies in protest, including “godfather of AI” Geoffrey Hinton, whistleblower Daniel Kokotajlo, several members of OpenAI’s superalignment team (which disbanded in 2024), the leader of Anthropic’s Safeguards Research Team, and many others. This mass exodus means that the people who are left at the companies are those most likely to think that everything will turn out okay, making companies’ actions less in line with expert concerns.

Demand a Global AI Treaty

The US should lead negotiations on a global AI treaty to ban superintelligent AI systems until they can be made safe.

Contact Your Representative

Continue Reading

Learn more about why AI is dangerous and how an international treaty would work.