International Scientific Report on the Safety of Advanced AI (Interim Report)

📅 2024-11-05
🏛️ arXiv.org
📈 Citations: 15
Influential: 2
🤖 AI Summary
This report synthesises the scientific evidence on the safety risks posed by general-purpose AI, focusing on three core categories: capability emergence across tasks, autonomous agentic behaviour, and societal-scale impacts. Method: As the first international scientific consensus report on AI safety, written by 75 AI experts including an Expert Advisory Panel nominated by 30 countries, the EU, and the UN, it combines multi-source risk assessment with analysis of how policy and technical interventions align. Contribution: The work delivers an evidence-grounded survey of risks across all three dimensions and establishes an iterative, cross-domain framework for AI risk identification and governance, providing a shared scientific baseline for coordinated, science-informed policymaking and technical intervention.

📝 Abstract
This is the interim publication of the first International Scientific Report on the Safety of Advanced AI. The report synthesises the scientific understanding of general-purpose AI -- AI that can perform a wide variety of tasks -- with a focus on understanding and managing its risks. A diverse group of 75 AI experts contributed to this report, including an international Expert Advisory Panel nominated by 30 countries, the EU, and the UN. Led by the Chair, these independent experts collectively had full discretion over the report's content. The final report is available at arXiv:2501.17805
Problem

Research questions and friction points this paper is trying to address.

Understanding risks of general-purpose AI systems
Managing safety concerns in advanced AI technologies
Synthesizing global expert insights on AI risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthesizes scientific understanding of general-purpose AI
Focuses on understanding and managing AI risks
Involves 75 international AI experts collaboration
👥 Authors

Yoshua Bengio

Sören Mindermann
University of Oxford, OATML
AI safety, deep learning, active learning, causal inference, COVID-19

Daniel Privitera

Tamay Besiroglu

Rishi Bommasani
CS PhD, Stanford University
Societal Impact of AI, AI Policy, AI Governance, Foundation Models

Stephen Casper
PhD student, MIT
AI safety, AI responsibility, red-teaming, robustness, auditing

Yejin Choi
Stanford University / NVIDIA
Natural Language Processing, Deep Learning, Artificial Intelligence, Commonsense Reasoning

Danielle Goldfarb

Hoda Heidari
Carnegie Mellon University
Responsible AI, AI Ethics, AI Accountability, Algorithmic Fairness, Algorithmic Economics

Leila Khalatbari

Shayne Longpre
MIT, Stanford, Apple
Deep Learning, Natural Language Understanding

Vasilios Mavroudis
Research Scientist, Alan Turing Institute
Machine Learning, Systems Security, Artificial Intelligence

Mantas Mazeika
Center for AI Safety
ML Safety, AI Safety, Machine Ethics, ML Reliability

Kwan Yee Ng

Chinasa T. Okolo

Deborah Raji

Theodora Skeadas

Florian Tramèr
Assistant Professor of Computer Science, ETH Zurich
ML Security, Computer Security, Cryptography, Privacy

Bayo Adekanmbi

Paul F. Christiano

David Dalrymple

Thomas G. Dietterich

Edward Felten

Pascale Fung
Dept. of Electronic & Computer Engineering, the Hong Kong University of Science & Technology
artificial intelligence, conversational AI, speech recognition, natural language processing, AI

Pierre-Olivier Gourinchas

Nick Jennings
Vice-Chancellor and President, Loughborough University
AI, Artificial Intelligence, Multi-Agent Systems, Intelligent Agents, multiagent systems

Andreas Krause

Percy Liang
Associate Professor of Computer Science, Stanford University
machine learning, natural language processing

T. Ludermir

Vidushi Marda

Helen Margetts
Professor of Society and the Internet, University of Oxford
Political Science, Public Policy, Collective Action, Digital Government, Public Administration

J. McDermid

Arvind Narayanan

Alondra Nelson

Alice Oh
KAIST Computer Science
machine learning, NLP, computational social science

Gopal Ramchurn

Stuart Russell
Professor of Computer Science, UC Berkeley

Marietje Schaake

Dawn Song
Professor of Computer Science, UC Berkeley
Computer Security and Privacy

Alvaro Soto
Professor, Universidad Catolica de Chile
Machine learning, computer vision, robotics

Lee Tiedrich

G. Varoquaux

Andrew Yao

Ya-Qin Zhang