International AI Safety Report 2026

πŸ“… 2026-02-24
πŸ“ˆ Citations: 6
✨ Influential: 0
πŸ€– AI Summary
This report assesses the capabilities, emerging risks, and safety of general-purpose AI systems through a systematic, internationally coordinated review aimed at informing global AI governance. Mandated by 29 countries and international organisations and involving over 100 interdisciplinary experts, it establishes the first multinational, cross-disciplinary framework for evaluating AI safety. By combining expert consensus, comprehensive literature review, and multidimensional risk analysis, the project synthesises technical, policy, and ethical perspectives into an authoritative international report on AI safety. It serves as a foundational reference for national regulatory policymaking and fosters a shared global understanding of the risks posed by general-purpose AI.

πŸ“ Abstract
The International AI Safety Report 2026 synthesises the current scientific evidence on the capabilities, emerging risks, and safety of general-purpose AI systems. The report series was mandated by the nations attending the AI Safety Summit in Bletchley, UK. 29 nations, the UN, the OECD, and the EU each nominated a representative to the report's Expert Advisory Panel. Over 100 AI experts contributed, representing diverse perspectives and disciplines. Led by the Report's Chair, these independent experts collectively had full discretion over the report's content.
Problem

Research questions and friction points this report addresses.

AI safety, general-purpose AI, emerging risks, scientific evidence, international collaboration
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI safety, international collaboration, expert consensus, general-purpose AI, risk assessment
Contributors

Yoshua Bengio
Professor of computer science, University of Montreal, Mila, IVADO, CIFAR
Machine learning, deep learning, artificial intelligence
Stephen Clare
Carina Prunkl
Ethics Institute, Utrecht University
Ethics of AI, Governance of AI, Philosophy of Science and Technology, Philosophy of Physics
Maksym Andriushchenko
ELLIS Institute TΓΌbingen & Max Planck Institute for Intelligent Systems
AI Safety, AI Alignment, LLMs, LLM agents
Ben Bucknall
DPhil Student, University of Oxford
Malcolm Murray
Rishi Bommasani
CS PhD, Stanford University
Societal Impact of AI, AI Policy, AI Governance, Foundation Models
Stephen Casper
PhD student, MIT
AI safety, AI responsibility, red-teaming, robustness, auditing
Tom Davidson
Ghent University
FPGA, run-time reconfiguration
Raymond Douglas
David Duvenaud
Associate Professor, University of Toronto
LLM Evals, Differential Equations, Approximate Inference
Philip Fox
Usman Gohar
Iowa State University
machine learning, artificial intelligence, fairness in machine learning, software engineering
Rose Hadshar
Anson Ho
Epoch AI
AI, Deep Learning, Quantitative Methods, AI Safety
Tiancheng Hu
University of Cambridge
natural language processing, computational social science
Cameron Jones
Sayash Kapoor
CS PhD, Princeton University
Reproducibility, AI agents, Societal impacts
Atoosa Kasirzadeh
Carnegie Mellon University
AI Ethics, AI Governance, Philosophy, Mathematical Optimization
Sam Manning
Research Fellow, Centre for the Governance of AI
development economics, economic impacts of artificial intelligence
Nestor Maslej
Stanford University, The Stanford Institute for Human-Centered Artificial Intelligence
Artificial Intelligence
Vasilios Mavroudis
Research Scientist, Alan Turing Institute
Machine Learning, Systems Security, Artificial Intelligence
Conor McGlynn
Richard Moulange
University of Cambridge
Jessica Newman