🤖 AI Summary
This paper addresses the multidimensional safety risks posed by general-purpose AI, focusing on three core categories: emergent capabilities across tasks, autonomous agentic behavior, and societal-scale impacts.
Method: As the first international scientific consensus report on AI safety, co-authored by 75 independent experts including an Expert Advisory Panel nominated by 30 countries, the EU, and the UN, the study establishes an independent, expert-led, multinational assessment paradigm. It integrates multi-source risk modeling, interdisciplinary Delphi methodology, and policy–technology alignment analysis.
Contribution: The work delivers an empirically grounded risk atlas covering all three categories and establishes a verifiable, iterative, cross-domain collaborative framework for identifying and governing risks from general-purpose AI. This constitutes the first authoritative, evidence-based scientific baseline for global AI governance, enabling coordinated, science-informed policymaking and technical intervention.
📝 Abstract
This is the interim publication of the first International Scientific Report on the Safety of Advanced AI. The report synthesises the scientific understanding of general-purpose AI -- AI that can perform a wide variety of tasks -- with a focus on understanding and managing its risks. A diverse group of 75 AI experts contributed to this report, including an international Expert Advisory Panel nominated by 30 countries, the EU, and the UN. Led by the Chair, these independent experts collectively had full discretion over the report's content. The final report is available at arXiv:2501.17805.