🤖 AI Summary
This study addresses the urgent governance gap exposed by the militarization of large reasoning models, where researchers lack effective means to prevent harmful deployments. It introduces the concept of institutional veto power into AI governance for the first time, drawing on precedents from nuclear non-proliferation and biomedical ethics to design an enforceable governance framework led by the communities most vulnerable to militarized AI. By identifying critical veto points across the model development lifecycle, the paper proposes embedded, institutionally anchored mechanisms that resist political manipulation. This approach shifts AI governance from symbolic safeguards toward substantive accountability, offering a viable pathway toward AI "disarmament."
📝 Abstract
The persistent militarization of large reasoning models stems not from technical necessity but from governance arrangements that strip researchers of meaningful authority to refuse harmful transfers and deployments. Existing accountability mechanisms such as model cards and responsible AI statements operate as reputational signals detached from decision-making architecture. We identify institutional veto power as a missing governance primitive: a formal authority to halt the subsequent use or distribution of research when credible risks of weaponization emerge. Drawing on precedents from nuclear non-proliferation and biomedical ethics, we map unprotected veto points across the research lifecycle, diagnose why compliance without enforceable constraints fails, and offer concrete institutional designs that embed veto authority while reducing the risk of political capture. We argue that the communities most vulnerable to military uses must lead governance design, and that institutional veto power is a prerequisite both for converting symbolic safeguards into enforceable responsibility and for achieving meaningful model disarmament.