AI, the spectre of decisive advantage, and international conflict
A reading list
Introduction
A growing body of work identifies the following challenge: Countries might worry that a rival’s AI development will significantly shift the geopolitical balance of power, perhaps even to the extent that the rival becomes world-dominating. As a result, countries might be willing to launch preemptive attacks on their adversaries to avoid losing power.
In this piece, we collate the most relevant work describing this potential dynamic and possible solutions.
Within each section, we order sources by roughly how important and relevant we think they are. The list is up-to-date as of April 2025, though we might update it in future.
We leave a few related things out of scope:
Other risks associated with advanced AI and international competition. For example, if states race each other to develop advanced AI, they could be incentivised to underinvest in safety and security measures, increasing the likelihood of severe accidents.1
International relations literature that doesn’t focus on AI, though there is extensive scholarship on the general topic of changes in the balance of international power.
Unpublished work.2
Before we start, here are definitions of some terms that are used frequently in this area. (Individual sources might use these terms slightly differently.)
An actor is said to have a decisive strategic advantage (DSA) if they could confidently defeat the rest of the world combined (see e.g. Aschenbrenner’s Situational Awareness for an argument that AI could cause a DSA).3
An artificial superintelligence (ASI) is a system that significantly outperforms the best humans at all strategically important tasks, such as scientific R&D, military strategy, general reasoning, persuasion, and long-term planning.4
Artificial general intelligence (AGI) is defined analogously, but the system only needs to match the best humans, not significantly outperform them.
1 Overview of the challenge
This section is for readings that primarily introduce the issue. They sometimes also discuss possible solutions.
Superintelligence Strategy (Hendrycks, Schmidt, and Wang 2025, 41 pages)
Argues that a) superintelligence will lead to a decisive strategic advantage, and relevant actors will know this; b) an actor that sees itself as losing a race to superintelligence would rather risk a war than suffer under a rival’s DSA; c) an actor that is winning a race to superintelligence would prefer a negotiated settlement to risking war. Therefore, the stable equilibrium is for the US and China to negotiate to constrain each other’s AI development.
Op-ed versions in Time and The Economist and an interview with the lead author on ChinaTalk.
See also: Helpful responses/critiques from RAND, Zvi Mowshowitz, and Wildeford and Delaney. Also a concrete scenario of how these events could play out from David Abecassis.
Schmidt had earlier expressed related ideas, such as in an interview with Noema.
The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating (Katzke and Futerman 2024, 26 pages)
Argues that under defensive realism, states are incentivised not to race to ASI because a) racing would risk war (see Superintelligence Strategy), b) racing increases the probability of takeover by misaligned, rogue AI systems, and c) racing could cause massive concentration of power and the erosion of liberal democracy.
To avoid this, we need international cooperation to ensure only one central actor pursues frontier AI development, at a slow and safe pace.
AI takeoff and nuclear war (Cotton-Barratt 2024)
Currently, the global order strongly disincentivises great-power conflict, in large part due to the threat of nuclear escalation and retaliation. If new AI-enabled technological breakthroughs change the balance between nuclear offense and defense (e.g. via missile defense systems), this could make war more likely. Similarly, if one country is approaching a DSA, this makes a preemptive war against it more likely.
One proposed solution is to create strong commitment mechanisms that force even an actor with a DSA to adhere to pre-DSA treaties to respect other nations’ sovereignty, possibly with AI systems themselves serving as treaty enforcers.
The Rival AI Deployment Problem: a Pre-deployment Agreement as the least-bad response (Belfield 2022)
If a country thinks its adversary is about to achieve ASI and/or a DSA, it faces three options: a) acquiesce, and hope its interests are not fully ruined; b) threaten, or actually carry out, economic, cyber, or physical attacks to prevent the rival from reaching ASI; c) broker an agreement on power sharing in a post-ASI world (that is, break the link from ASI to DSA).
Clearly option c) is best, but it may be hard to achieve given low trust levels.
Artificial General Intelligence's Five Hard National Security Problems | RAND (Mitre and Predd 2025, 19 pages)
Most relevant are problem 1 (AGI may be used to design and build a ‘wonder weapon’ that gives its wielder a DSA) and problem 5 (the period around the creation of AGI may be especially strategically uncertain and unstable, increasing the risk of conflict).
Carl Shulman on the economy and national security after AGI (Part 1) | 80,000 Hours (Wiblin 2024, 4 hours)
AGI will lead to the automation of ~all remote work, and trigger explosive economic and industrial growth. This could also lead to a DSA if one country’s lead expands dramatically after they reach AGI/ASI.
Strategic Insights from Simulation Gaming of AI Race Dynamics (Gruetzemacher 2024, 41 pages)
Across many iterations of an AGI tabletop roleplaying game, laggard countries frequently escalated to military action when they feared an adversary was about to achieve a DSA.
2 Possible solutions
The basic scenario described above is that 1) a single actor reaches ASI first, 2) this actor uses ASI to achieve a DSA, and 3) the interests of other actors are trampled on thereafter. Researchers have suggested interventions focusing on each of these steps.
2.1 Preventing unilateral ASI
International control of advanced AI would prevent any one country from reaching ASI with a clear lead, which might in turn prevent individual countries from acquiring a DSA. Several papers have discussed international control of AI, though generally with a focus on reducing a different kind of competitive dynamic: the risk that competition creates incentives to cut corners and thereby contributes to disastrous accidents.
Multinational AGI Consortium (MAGIC): A Proposal for International Coordination on AI (Hausenloy, Miotti, and Dennis 2023, 8 pages)
MAGIC would have a monopoly on frontier AI development, thereby reducing competitive dynamics.
International AI institutions: A literature review of models, examples, and proposals (Maas and Villalobos 2023, 42 pages)
“Model 6: International joint research” is relevant here. The authors cite parallels with international projects such as CERN and ITER.
International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons | GovAI (Zaidi and Dafoe 2021, 72 pages)
The US’s Baruch Plan of 1946 proposed transferring its nuclear weapons to the UN, while also preventing any other nation from developing nuclear weapons. The authors draw lessons from this for potential internationalised control of AGI.5
2.2 Avoiding AI systems that would grant a DSA
This section is about the leading AI power refraining from, and/or credibly committing not to, develop AI systems that would grant a DSA. As above, such restraint might be motivated not just by geopolitical considerations but also by the aim of reducing accident risks; the existing literature is primarily framed in terms of accident risk.
What AI systems would be less likely to grant a DSA than the systems that would be built by default? Existing ideas include systems that are…
Less capable in general. For example, due to limits on the amount of compute used for training models (Miotti and Wasil 2023).
Less capable in domains that are particularly relevant to DSAs, such as developing weapons of mass destruction.
It might be possible to develop AI systems that are generally capable but weak in specific domains, such as by excluding some information from their training data or implementing “unlearning.” See, for example, Hendrycks et al. (2024).
Less likely to comply with requests to act on a DSA. We highlight some examples below (though note that these are very speculative):
AI systems that follow a constitution that would make them refuse requests to violate other countries’ sovereignty. We are not aware of literature about the geopolitical effects of AI constitutions.
AI systems aligned to principles that would be inconsistent with acting on a DSA, such as following international law. See Gabriel (2020) for discussion of what the goal of alignment could be.
The sources below discuss verification and enforcement mechanisms that would be relevant for international agreements about AI—such measures might be needed to ensure compliance with agreements to avoid certain kinds of AI.
Hardware-enabled governance mechanisms (Kulp et al. 2024, 81 pages) and Interim report: Mechanisms for flexible hardware-enabled guarantees (Petrie et al. 2024, 34 pages)
Both reports discuss how to embed hardware mechanisms in AI chips that provide assurances about what sort of workloads are being run and where the chip is located, or that even prevent some types of AI training runs from being done using the chip. More speculatively, hardware mechanisms could perhaps be used to provide higher-level guarantees about how a model would behave.
Mechanisms to verify international agreements about AI development (Scher and Thiergart 2024, 148 pages)
Ideal verification regimes will require the development of new technologies (see below), but even existing low-tech approaches such as physical inspections of AI infrastructure can provide some assurances about how compute is being deployed.
Verification methods for international AI agreements (Wasil et al. 2024, 17 pages)
Proposes various verification methods, including physical inspections of AI datacentres, companies, and supply chains; traditional intelligence gathering and analysis; and on-chip governance mechanisms.
International AI institutions: A literature review of models, examples, and proposals (Maas and Villalobos 2023, 42 pages)
“Model 4: Enforcement of standards or restrictions” is relevant here, citing parallels with international watchdogs and verification regimes such as the IAEA and biological and chemical weapons conventions.
2.3 Preventing lagging countries from being trampled
Even if one actor were to achieve a DSA, there may be ways to improve outcomes for other countries. The readings below focus on two mechanisms for this: confidence-building measures and benefit sharing.
Confidence-Building Measures (CBMs)
CBMs could aim to make laggard countries justifiably less worried about the intentions and plans of a leading country that may be on track for a DSA. That said, existing proposals for AI CBMs mostly target lower-stakes scenarios of AI development and geopolitical tension, and might not be sufficient for the kind of extreme scenario described above.
Decoding Intentions: Artificial Intelligence and Costly Signals | CSET (Imbrie, Daniels, and Toner 2023, 66 pages)
Countries should send costly signals of good intentions, such as a) public commitments that would be reputationally costly to walk back, b) early investment in safety, which cannot be undone, and c) extensive reporting of compute usage.
Confidence-Building Measures for Artificial Intelligence: A Multilateral Perspective | UNIDIR (Puscas 2024, 37 pages)
CBMs are “planned procedures to prevent hostilities, to avert escalation, to reduce military tension, and to build mutual trust between countries”. They usually involve increasing transparency into domestic dual-use activities and promoting cooperation between countries on mitigating technological risks. CBMs have been used in previous treaties concerning cyber, space, and biological warfare.6
Confidence-Building Measures for Artificial Intelligence: Workshop Proceedings (Shoker et al. 2023, 22 pages)
This workshop, co-convened by OpenAI and UC Berkeley, proposed CBMs in the following categories: a) crisis hotlines, b) incident sharing mechanisms, c) transparency via model and system cards, d) content provenance via watermarking, e) collaborative red teaming and table-top exercises, and f) sharing datasets and model evaluations.
AI and International Stability: Risks and Confidence-Building Measures | CNAS (Horowitz and Scharre 2021, 27 pages)
CBMs in the context of military AI applications are distinct from arms control agreements, as the latter are centrally about limiting the development and deployment of certain weapons. CBMs are generally softer and focus on promoting dialogue, transparency and norms.
Benefit sharing
Even if one country were to have a DSA, it might diffuse many of the benefits from ASI to the rest of the world. If the leading country could credibly commit to doing so, this might make other countries less concerned about it obtaining a DSA.
Sharing the AI Windfall: A Strategic Approach to International Benefit-Sharing (Justen 2024)
Distinguishes between sharing AI benefits that do not significantly empower the recipient, and ‘power sharing’, in which the leader empowers recipients and thereby voluntarily gives up, or does not pursue, a DSA. (Our note: If enough power is shared, this could actually prevent the creation of a DSA, which would then make this approach belong in section 2.2.)
Sharing benefits and power with allies and adversaries could also serve as a bargaining chip, offered in exchange for the other parties not racing the leader on frontier AI development.
Options and Motivations for International AI Benefit Sharing | GovAI (Dennis et al. 2025, 55 pages)
Three main approaches to benefit-sharing are: a) sharing the inputs to AI development, such as data, compute, and algorithmic insights; b) providing access to frontier or near-frontier AI systems via APIs or open-weight models; c) distributing financial returns from AI.
Machines of Loving Grace (Amodei 2024, section 4)
The US could dissuade other countries from challenging its AI dominance partly by offering carrots, such as access to benefits from advanced AI. This could be somewhat analogous to the Cold War-era ‘Atoms for Peace’ program, which distributed civilian nuclear technology to bolster the US’s diplomatic standing.
Askell et al. discuss this dynamic but focus on competition between AI companies rather than competition between states.
2. Though if you reach out to us, we might be able to discuss non-published work.
3. This is sometimes called a decisive military advantage. Alternate definitions focus on controlling a large majority of the world economy.
4. Definitions vary on whether being superhuman at physical dexterity is also required.
5. Forethought provide another historical case study in the early international control of communication satellites (MacAskill and Hadshar 2025).


