A Taxonomy of Omnicidal Futures Involving Artificial Intelligence

📅 2025-07-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the extreme yet critically underexamined risk that advanced artificial intelligence causes human extinction, understood here as scenarios in which all or almost all humans are killed. Recognizing the absence of a unified analytical framework for such existential risks, the study develops a comprehensive taxonomy of AI-driven extinction pathways, encompassing technical misalignment, malicious deployment, and systemic institutional failure. Methodologically, it integrates scenario analysis, cross-disciplinary forecasting, and quantitative risk modeling, jointly accounting for technological evolution, socio-institutional dynamics, and agent-level behavioral incentives. The resulting framework addresses a foundational gap in AI safety research concerning the systematic representation of catastrophic risks. It also yields an openly accessible, empirically grounded repository of cases, designed for public deliberation, policy calibration, and iterative validation, that directly supports the development of robust, preventive global AI governance mechanisms.

📝 Abstract
This report presents a taxonomy and examples of potential omnicidal events resulting from AI: scenarios where all or almost all humans are killed. These events are not presented as inevitable, but as possibilities that we can work to avoid. Insofar as large institutions require a degree of public support in order to take certain actions, we hope that by presenting these possibilities in public, we can help to support preventive measures against catastrophic risks from AI.
Problem

Research questions and friction points this paper is trying to address.

Classifying potential AI-driven human extinction scenarios
Identifying preventable omnicidal risks from artificial intelligence
Promoting public awareness to mitigate catastrophic AI outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Taxonomy of AI omnicidal scenarios
Public presentation of catastrophic risks
Support for preventive AI measures