Open Problems in Technical AI Governance

📅 2024-07-20
🏛️ arXiv.org
📈 Citations: 19
Influential: 0
🤖 AI Summary
Rapid AI advancement poses novel governance challenges that call for a rigorous, technically grounded response. Method: The paper introduces "technical AI governance" as a distinct field and, adopting a problem-driven methodology, organizes its open problems around three core domains: identifying areas where governance intervention is needed, assessing the efficacy of potential governance actions, and designing mechanisms for enforcement, incentivization, and compliance. Contributions/Results: (1) a structured definition of technical AI governance and a taxonomy of its core problems; (2) an extensible, openly available catalog of open problems that bridges the technical and policy communities; and (3) a problem-oriented guide for researchers and funders seeking to prioritize high-impact technical governance research.

📝 Abstract
AI progress is creating a growing range of risks and opportunities, but it is often unclear how they should be navigated. In many cases, the barriers and uncertainties faced are at least partly technical. Technical AI governance, referring to technical analysis and tools for supporting the effective governance of AI, seeks to address such challenges. It can help to (a) identify areas where intervention is needed, (b) identify and assess the efficacy of potential governance actions, and (c) enhance governance options by designing mechanisms for enforcement, incentivization, or compliance. In this paper, we explain what technical AI governance is, why it is important, and present a taxonomy and incomplete catalog of its open problems. This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
Problem

Research questions and friction points this paper is trying to address.

Address technical barriers and uncertainties in AI governance
Identify and assess effective governance actions
Develop mechanisms for enforcement, incentivization, and compliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Technical analysis and tools to support AI governance
Methods to identify intervention areas and assess governance actions
Mechanism designs for enforcement, incentivization, and compliance
👥 Authors
Anka Reuel
CS Ph.D. Candidate, Stanford University
AI Governance, Responsible AI, AI Ethics, AI Safety
Ben Bucknall
DPhil Student, University of Oxford
Stephen Casper
PhD student, MIT
AI safety, AI responsibility, red-teaming, robustness, auditing
Tim Fist
Institute for Progress & Center for a New American Security
Lisa Soder
London School of Economics
Onni Aarne
Institute for AI Policy and Strategy
Lewis Hammond
University of Oxford
Artificial Intelligence, Machine Learning, Game Theory, Formal Verification, AI Safety
Lujain Ibrahim
University of Oxford
human-AI interaction, evaluations, societal impact of AI, sociotechnical AI
Alan Chan
Centre for the Governance of AI
AI safety, AI governance
Peter Wills
Centre for the Governance of AI & University of Oxford
Markus Anderljung
Centre for the Governance of AI
AI governance, AI policy, AI forecasting
Ben Garfinkel
Director, Centre for the Governance of AI; Research Fellow, University of Oxford
International Relations, AI Governance, Ethics
Lennart Heim
RAND, GovAI
AI Governance, AI and Compute, AI Policy
Andrew Trask
University of Oxford and OpenMined
Deep Learning, Differential Privacy, Secure Multi-Party Computation, Federated Learning, Natural Language Processing
Gabriel Mukobi
Stanford University
Rylan Schaeffer
Stanford University
artificial intelligence, machine learning, computational neuroscience
Mauricio Baker
Technology and Security Policy Fellow, RAND
AI Policy, AI and Compute
Sara Hooker
Head of Cohere For AI
Machine learning efficiency, robustness, interpretability, trustworthy ML
Irene Solaiman
Hugging Face
artificial intelligence
Alexandra Sasha Luccioni
Hugging Face
Nitarshan Rajkumar
University of Cambridge
Artificial General Intelligence
Nicolas Moes
The Future Society
Neel Guha
Stanford Computer Science, Stanford Law
artificial intelligence, machine learning, torts, civil procedure, regulatory policy
Jessica Newman
University of California, Berkeley
Yoshua Bengio
Professor of computer science, University of Montreal, Mila, IVADO, CIFAR
Machine learning, deep learning, artificial intelligence
Tobin South
Massachusetts Institute of Technology
Alex Pentland
Stanford HAI
Jeffrey Ladish
Palisade Research
Sanmi Koyejo
Assistant Professor, Stanford University
Machine Learning, Healthcare AI, Neuroinformatics
Mykel J. Kochenderfer
Associate Professor, Stanford University
Artificial Intelligence, Machine Learning, Decision Theory, Safety
Robert Trager
University of Oxford
AI Governance, Diplomacy, Institutional Design, Social Theory, Applied Mathematics