Domestic frontier AI regulation, an IAEA for AI, an NPT for AI, and a US-led Allied Public-Private Partnership for AI: Four institutions for governing and developing frontier AI

📅 2025-07-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current global AI governance suffers from institutional gaps, weak international coordination, and inequitable benefit distribution. Method: This paper proposes a novel institutional framework anchored in computational capacity ("compute") as a governance lever. First, it designs a global AI governance mechanism—inspired by the International Atomic Energy Agency—comprising a chip non-proliferation agreement and an alliance-led public-private megaproject. Second, it develops integrated techno-policy tools: a domestic compute-indexed risk assessment system, real-time data centre monitoring, model release gatekeeping, and export control–linked certification. Third, it advocates dynamic alignment between national regulations and evolving international norms. Contribution: The paper presents the first systematic, compute-access–based incentive architecture for global AI governance, reconciling security, peaceful development, and equitable benefit-sharing. It offers a pragmatic, incremental pathway toward effective multilateral AI governance.

📝 Abstract
Compute governance can underpin international institutions for the governance of frontier AI. To demonstrate this, I explore four institutions for governing and developing frontier AI. Next steps for compute-indexed domestic frontier AI regulation could include risk assessments and pre-approvals, data centre usage reports, and release gate regulation. Domestic regimes could be harmonized and monitored through an International AI Agency (IAIA) - an International Atomic Energy Agency (IAEA) for AI. This could be backed up by a Secure Chips Agreement - a Non-Proliferation Treaty (NPT) for AI. This would be a non-proliferation regime for advanced chips, building on the chip export controls: states that do not have an IAIA-certified frontier regulation regime would not be allowed to import advanced chips. Frontier training runs could be carried out by a megaproject between the USA and its allies - a US-led Allied Public-Private Partnership for frontier AI. As a project to develop advanced AI, this could have significant advantages over alternatives led by Big Tech or particular states: it could be more legitimate, secure, safe, non-adversarial, and peaceful, and less prone to misuse. For each of these four scenarios, a key incentive for participation is access to the advanced AI chips that are necessary for frontier training runs and large-scale inference. Together, they can create a situation in which governments can be reassured that frontier AI is developed and deployed in a secure manner, with misuse minimised and benefits widely shared. Building these institutions may take years or decades, but progress is incremental and evolutionary, and the first steps have already been taken.
Problem

Research questions and friction points this paper is trying to address.

Establish international institutions for frontier AI governance
Regulate domestic AI through risk assessments and monitoring
Ensure secure and peaceful development of advanced AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compute governance underpins frontier AI institutions
International AI Agency monitors domestic AI regimes
US-led Partnership develops secure advanced AI