🤖 AI Summary
This paper examines the underappreciated geopolitical risks of racing to AGI, arguing that national efforts to accelerate frontier AI development exacerbate catastrophic risks—including AI misalignment and nuclear instability—while undermining the prospects that technical AI safety research will be effective. Conversely, the anticipated strategic benefits of racing (e.g., a decisive advantage) are overestimated: even "winning" the race would not guarantee complete domination over the losers. Bridging AI governance and international relations theory, the paper's principal contribution is a dual-track alternative to zero-sum competition: international cooperation and coordination, possibly complemented by carefully crafted deterrence measures. The authors argue that this approach carries far smaller risks while promising most of the benefits racing is supposed to deliver, concluding that racing to AGI is in no one's self-interest and that incentivizing and pursuing international cooperation on AI is the preferable path.
📝 Abstract
AGI Racing is the view that it is in the self-interest of major actors in AI development, especially powerful nations, to accelerate their frontier AI development in order to build highly capable AI, especially artificial general intelligence (AGI), before competitors do. We argue against AGI Racing. First, the downsides of racing to AGI are much greater than this view portrays. Racing to AGI would substantially increase catastrophic risks from AI, including nuclear instability, and undermine the prospects that technical AI safety research will be effective. Second, the expected benefits of racing may be lower than proponents of AGI Racing hold. In particular, it is questionable whether winning the race enables complete domination over the losers. Third, international cooperation and coordination, and perhaps carefully crafted deterrence measures, constitute viable alternatives to racing to AGI that carry much smaller risks and promise to deliver most of the benefits that racing to AGI is supposed to provide. Hence, racing to AGI is not in anyone's self-interest, as other actions, particularly incentivizing and seeking international cooperation around AI issues, are preferable.