🤖 AI Summary
This paper addresses an extreme yet critically underexamined risk posed by advanced artificial intelligence: human extinction, specifically global population collapse. Recognizing the absence of a unified analytical framework for such existential risks, the study develops the first comprehensive taxonomy of AI-driven extinction pathways, encompassing technical misalignment, malicious deployment, and systemic institutional failure. Methodologically, it integrates scenario analysis, cross-disciplinary forecasting, and quantitative risk modeling, jointly accounting for technological evolution, socio-institutional dynamics, and agent-level behavioral incentives. The resulting framework addresses a foundational gap in AI safety research: the systematic representation of catastrophic risks. It also yields an openly accessible, empirically grounded case repository, designed for public deliberation, policy calibration, and iterative validation, that directly supports the development of robust, preventive global AI governance mechanisms.
📝 Abstract
This report presents a taxonomy and examples of potential omnicidal events resulting from AI: scenarios in which all or almost all humans are killed. These events are presented not as inevitable, but as possibilities that we can work to avoid. Insofar as large institutions require a degree of public support to take certain actions, we hope that by presenting these possibilities publicly, we can help build support for preventive measures against catastrophic risks from AI.