🤖 AI Summary
This paper addresses a core governance question: who should lead advanced AI evaluations? It proposes a hybrid audit framework integrating public institutions and private actors. Drawing on audit practices across nine high-risk sectors, it develops a three-step argument establishing that public authorities should lead safety-critical gray-box and white-box model audits. The study introduces a three-dimensional criterion for allocating responsibilities (risk severity, information sensitivity, and verification cost) and estimates public audit capacity requirements, e.g., professional teams of hundreds of employees in large jurisdictions. Through cross-sectoral case analysis, risk-tiered modeling, and governance capability mapping, it delivers an actionable AI audit governance roadmap, offering theoretical grounding and practical guidance for major jurisdictions, including the EU and the U.S., to establish hybrid audit systems that balance authoritative oversight with operational efficiency.
📝 Abstract
Artificial Intelligence (AI) Safety Institutes and governments worldwide are deciding whether to evaluate and audit advanced AI themselves, to support a private ecosystem of auditors, or to do both.
Auditing regimes have been established in a wide range of industries to monitor and evaluate firms' compliance with regulation, and auditing is a necessary governance tool for understanding and managing the risks of a technology. This paper draws on nine such regimes to inform (i) who should audit which parts of advanced AI; and (ii) what resources, competence and access public bodies may need to audit advanced AI effectively.
First, the effective distribution of responsibility between public and private auditors depends heavily on specific industry and audit conditions. Given advanced AI's risk profile, the sensitivity of the information involved in the auditing process, and the high cost of verifying AI labs' safety and benefit claims, we recommend that public bodies become directly involved in safety-critical AI model audits, especially gray- and white-box audits. Governance and security audits, which are well established in other industry contexts, as well as black-box model audits, may be more efficiently provided by a private market of auditors under public oversight.
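The allocation rule above can be sketched as a toy decision function. This is an illustrative reading of the recommendation only; the type names, categories, and the exact boolean condition are assumptions for the sketch, not definitions from the paper:

```python
from dataclasses import dataclass

@dataclass
class Audit:
    kind: str              # "model", "governance", or "security" (assumed categories)
    access: str            # "black-box", "gray-box", or "white-box"
    safety_critical: bool  # proxy for high risk severity

def lead_auditor(audit: Audit) -> str:
    """Suggest a lead auditor following the paper's qualitative rule:
    public bodies lead safety-critical gray-/white-box model audits;
    other audits can go to private auditors under public oversight."""
    if (audit.kind == "model"
            and audit.safety_critical
            and audit.access in ("gray-box", "white-box")):
        return "public body"
    return "private auditor under public oversight"

print(lead_auditor(Audit("model", "white-box", True)))       # public body
print(lead_auditor(Audit("governance", "black-box", False)))  # private auditor under public oversight
```

A real allocation would of course weigh the three criteria (risk severity, information sensitivity, verification cost) on continuous scales rather than a single boolean, but the sketch captures the paper's headline split.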
Second, to fulfill their role in advanced AI audits effectively, public bodies need extensive access to models and facilities. Their capacity should scale with the industry's risk level, size and market concentration, potentially requiring hundreds of employees for auditing in large jurisdictions like the EU or the US, as in nuclear safety and the life sciences.