🤖 AI Summary
This study addresses a gap in current AI governance, which predominantly emphasizes substantive rules while neglecting the legal and regulatory infrastructure needed to generate and implement them. The work positions legal infrastructure as a cornerstone of effective AI governance and proposes an institutional framework comprising a registration regime for frontier models, a registration and identification regime for autonomous agents, and a market-oriented model in which private companies deliver regulatory services. Through legal design and policy mechanism analysis, the research outlines an actionable institutional pathway intended to enhance the flexibility, scalability, and enforcement efficacy of AI governance rules.
📝 Abstract
Most of our AI governance efforts focus on substance: what rules do we want in place? What limits or checks do we want to impose on AI development and deployment? But a key role for law is not only to establish substantive rules but also to build the legal and regulatory infrastructure that generates and implements them. The transformative nature of AI demands particular attention to building such legal and regulatory frameworks. In this PNAS Perspective piece I review three examples I have proposed: the creation of registration regimes for frontier models; the creation of registration and identification regimes for autonomous agents; and the design of regulatory markets that enable private companies to innovate and deliver AI regulatory services.