Public vs Private Bodies: Who Should Run Advanced AI Evaluations and Audits? A Three-Step Logic Based on Case Studies of High-Risk Industries

📅 2024-07-30
🏛️ AAAI/ACM Conference on AI, Ethics, and Society
📈 Citations: 2 · Influential: 0
🤖 AI Summary
This paper addresses a core governance question: who should lead advanced AI evaluations? It proposes a hybrid audit framework that integrates public institutions and private actors. Drawing on audit practices across nine high-risk sectors, it develops a three-step logical model establishing that public authorities should lead safety-critical gray-box and white-box model audits. The study introduces a three-dimensional criterion for allocating responsibilities (risk severity, information sensitivity, and verification cost) and quantifies the public audit capacity this requires, on the order of hundreds of staff in large jurisdictions. Through cross-sectoral case analysis, risk-tiered modeling, and governance capability mapping, it delivers an actionable roadmap for AI audit governance, offering theoretical grounding and practical guidance for major jurisdictions, including the EU and the U.S., to establish hybrid audit systems that balance authoritative oversight with operational efficiency.
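The allocation logic can be sketched as a simple decision rule. The Python sketch below is a hypothetical rendering, assuming a 0-to-1 score on each of the paper's three dimensions and an illustrative threshold; the paper specifies the dimensions and the direction of reasoning, not this functional form.

```python
# Hypothetical sketch of the paper's allocation logic. The 0-1 scores,
# threshold, and combination rule are illustrative assumptions, not
# taken from the paper itself.

from dataclasses import dataclass
from enum import Enum

class Auditor(Enum):
    PUBLIC = "public body"
    PRIVATE = "private auditor under public oversight"

@dataclass
class AuditTask:
    name: str
    risk_severity: float      # 0..1, severity of harm if the audit fails
    info_sensitivity: float   # 0..1, sensitivity of model/facility access
    verification_cost: float  # 0..1, cost of independently verifying claims

def allocate(task: AuditTask, threshold: float = 0.6) -> Auditor:
    """Assign an audit to a public or private auditor.

    Follows the paper's direction of reasoning: the higher the risk
    severity, information sensitivity, and verification cost, the
    stronger the case for direct public involvement.
    """
    score = max(task.risk_severity, task.info_sensitivity, task.verification_cost)
    return Auditor.PUBLIC if score >= threshold else Auditor.PRIVATE

# A white-box audit of a frontier model scores high on all three
# dimensions, so it is routed to a public body; a routine governance
# audit goes to the private market under public oversight.
print(allocate(AuditTask("white-box model audit", 0.9, 0.9, 0.8)))
print(allocate(AuditTask("governance audit", 0.4, 0.3, 0.2)))
```

Taking the maximum rather than an average reflects the intuition that a high score on any one dimension (e.g., white-box access to sensitive model weights) can by itself warrant direct public involvement.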

📝 Abstract
Artificial Intelligence (AI) Safety Institutes and governments worldwide are deciding whether to evaluate and audit advanced AI themselves, to support a private auditor ecosystem, or to do both. Auditing regimes have been established in a wide range of industry contexts to monitor and evaluate firms' compliance with regulation, and auditing is a necessary governance tool for understanding and managing the risks of a technology. This paper draws on nine such regimes to inform (i) who should audit which parts of advanced AI, and (ii) how many resources, and how much competence and access, public bodies may need to audit advanced AI effectively. First, the effective distribution of responsibility between public and private auditors depends heavily on specific industry and audit conditions. Given advanced AI's risk profile, the sensitivity of information involved in the auditing process, and the high cost of verifying the safety and benefit claims of AI labs, we recommend that public bodies become directly involved in safety-critical, especially gray- and white-box, AI model audits. Governance and security audits, which are well established in other industry contexts, as well as black-box model audits, may be more efficiently provided by a private market of auditors under public oversight. Second, to fulfill their role in advanced AI audits effectively, public bodies need extensive access to models and facilities. Public bodies' capacity should scale with the industry's risk level, size, and market concentration, potentially requiring hundreds of employees for auditing in large jurisdictions like the EU or US, as in nuclear safety and the life sciences.
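The abstract's capacity claim, that public audit capacity should scale with risk level, industry size, and market concentration, can be made concrete with a toy headcount estimate. Everything below, including the functional form, the coefficients, and the `experts_per_lab` parameter, is an illustrative assumption; the paper states only the scaling factors and the rough "hundreds of employees" magnitude.

```python
# Toy capacity estimate. The functional form and numbers are
# illustrative assumptions; the paper only says capacity should scale
# with risk level, industry size, and market concentration.

def required_auditors(risk_level: float, n_frontier_labs: int,
                      market_concentration: float,
                      experts_per_lab: int = 20) -> int:
    """Rough headcount estimate for a public AI audit body.

    Higher market concentration means fewer, larger labs, each needing
    deeper (gray-/white-box) scrutiny, so it scales the per-lab team up.
    """
    per_lab = experts_per_lab * (1 + market_concentration)
    return round(risk_level * n_frontier_labs * per_lab)

# A large jurisdiction with ~10 frontier labs, high risk, and a
# concentrated market lands in the low hundreds (here: 300), consistent
# with the paper's "hundreds of employees" order of magnitude.
print(required_auditors(risk_level=1.0, n_frontier_labs=10,
                        market_concentration=0.5))
```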
Problem

Research questions and friction points this paper is trying to address.

Determine roles for public vs private advanced AI evaluators
Assess required public capacity for effective AI evaluations
Balance evaluation efficiency with security and governance needs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Public-private collaboration in AI evaluations
Public oversight for critical AI model audits
Scalable public capacity for AI risk management
👥 Authors
Merlin Stein
University of Oxford
Milan Gandhi
University of Oxford
Theresa Kriecherbauer
University of Oxford
Amin Oueslati
Hertie School
Robert Trager
University of Oxford
🏷️ Topics: AI Governance · Diplomacy · Institutional Design · Social Theory · Applied Mathematics