The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence

πŸ“… 2024-08-14
πŸ›οΈ AGI - Artificial General Intelligence - Robotics - Safety & Alignment
πŸ“ˆ Citations: 27
✨ Influential: 3
πŸ€– AI Summary
Problem: The absence of a consensus definition and classification framework for AI risks impedes interdisciplinary research, auditing practice, and policy coordination. Method: We construct the first publicly accessible, scalable, and structured AI risk knowledge base, systematically integrating 777 distinct risks from 43 existing taxonomies. We propose a dual-layer classification framework: a causal layer (capturing entity, intentionality, and timing) and a domain layer (comprising seven high-level domains divided into 23 subdomains), enabling standardized cross-taxonomy mapping and ongoing evolution. Our methodology combines a systematic literature review, expert consultation, and best-fit framework synthesis, implemented via an online collaborative platform that supports multidimensional querying and continuous curation. Contribution/Results: This living knowledge base improves consistency in risk identification and supports coordinated responses across stakeholders, establishing foundational infrastructure for AI governance, auditing, and interdisciplinary research.

πŸ“ Abstract
The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public. However, a lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them. This paper addresses this gap by creating an AI Risk Repository to serve as a common frame of reference. This comprises a living database of 777 risks extracted from 43 taxonomies, which can be filtered based on two overarching taxonomies and easily accessed, modified, and updated via our website and online spreadsheets. We construct our Repository with a systematic review of taxonomies and other structured classifications of AI risk followed by an expert consultation. We develop our taxonomies of AI risk using a best-fit framework synthesis. Our high-level Causal Taxonomy of AI Risks classifies each risk by its causal factors: (1) Entity: Human, AI; (2) Intentionality: Intentional, Unintentional; and (3) Timing: Pre-deployment, Post-deployment. Our mid-level Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental, and (7) AI system safety, failures, & limitations. These are further divided into 23 subdomains. The AI Risk Repository is, to our knowledge, the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database. This creates a foundation for a more coordinated, coherent, and complete approach to defining, auditing, and managing the risks posed by AI systems.
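The two taxonomies described in the abstract amount to a small faceted schema over risk entries. As a minimal sketch of that idea (the field names, subdomain labels, and example entries below are illustrative assumptions, not the repository's actual spreadsheet schema), each risk can be tagged along both taxonomies and then filtered on any combination of facets:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    # Causal Taxonomy facets (per the abstract)
    entity: str          # "Human" or "AI"
    intentionality: str  # "Intentional" or "Unintentional"
    timing: str          # "Pre-deployment" or "Post-deployment"
    # Domain Taxonomy facets (domain names from the abstract;
    # subdomain labels here are hypothetical)
    domain: str
    subdomain: str

risks = [
    Risk("Model inadvertently reveals personal data from its training set",
         entity="AI", intentionality="Unintentional", timing="Post-deployment",
         domain="Privacy & security", subdomain="Privacy leakage"),
    Risk("Actor uses a model to mass-produce targeted disinformation",
         entity="Human", intentionality="Intentional", timing="Post-deployment",
         domain="Malicious actors & misuse", subdomain="Disinformation"),
]

# Combine facets from both taxonomies, as the website's filters do conceptually
ai_caused_post_deployment = [
    r for r in risks
    if r.entity == "AI" and r.timing == "Post-deployment"
]
print(len(ai_caused_post_deployment))  # 1
```

The point of the dual classification is visible even at this scale: the causal facets answer "who caused it, when, and on purpose?", while the domain facets answer "what kind of harm?", and any query can cut across both.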
Problem

Research questions and friction points this paper is trying to address.

Lack of shared understanding of AI risks
Need for a comprehensive AI risk database
Classification of AI risks by causal factors and domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Living database of 777 AI risks from 43 taxonomies
Two-tier taxonomy system for AI risk classification
Publicly accessible and extensible AI risk repository
Authors

P. Slattery (MIT FutureTech, Massachusetts Institute of Technology)
Alexander K. Saeri (MIT FutureTech, Massachusetts Institute of Technology)
Emily A. C. Grundy (MIT FutureTech, Massachusetts Institute of Technology)
Jessica Graham (School of Psychology, The University of Queensland)
Michael Noetel (Ready Research)
Risto Uuk (Head of EU Policy and Research, Future of Life Institute)
James Dao (Harmony Intelligence)
Soroush Pour (Harmony Intelligence)
Stephen Casper (PhD student, MIT)
Neil Thompson (Director, MIT FutureTech at Computer Science and A.I. Lab and the Initiative on the Digital Economy)