AI Summary
The absence of a consensus definition and classification framework for AI risks impedes interdisciplinary research, auditing practices, and policy coordination. Method: We construct the first publicly accessible, scalable, and structured AI risk knowledge base, systematically integrating 777 distinct risks from 43 existing taxonomies. We propose a novel dual-layer classification framework: a "causal" layer (capturing entity-, intent-, and time-related dimensions) and a "domain" layer (comprising seven high-level categories and 23 subcategories), enabling standardized cross-taxonomy mapping and dynamic evolution. Our methodology combines systematic literature review, expert consultation, and best-fit framework synthesis, implemented via an online collaborative platform supporting multidimensional querying and continuous curation. Contribution/Results: This living knowledge base significantly improves consistency in risk identification and enhances coordinated response across stakeholders, establishing foundational infrastructure for AI governance, auditing, and interdisciplinary research.
Abstract
The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public. However, a lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them. This paper addresses this gap by creating an AI Risk Repository to serve as a common frame of reference. This comprises a living database of 777 risks extracted from 43 taxonomies, which can be filtered based on two overarching taxonomies and easily accessed, modified, and updated via our website and online spreadsheets. We construct our Repository with a systematic review of taxonomies and other structured classifications of AI risk, followed by an expert consultation. We develop our taxonomies of AI risk using a best-fit framework synthesis. Our high-level Causal Taxonomy of AI Risks classifies each risk by its causal factors: (1) Entity: Human, AI; (2) Intentionality: Intentional, Unintentional; and (3) Timing: Pre-deployment, Post-deployment. Our mid-level Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental, and (7) AI system safety, failures, & limitations. These are further divided into 23 subdomains. The AI Risk Repository is, to our knowledge, the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database. This creates a foundation for a more coordinated, coherent, and complete approach to defining, auditing, and managing the risks posed by AI systems.
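The two-layer classification described above can be sketched as a simple data model. This is a minimal illustration, not the Repository's actual schema: the enum values and domain names come from the abstract, while the field names, the two example entries, and the `filter_risks` helper are invented for demonstration.

```python
from dataclasses import dataclass
from enum import Enum

# Causal Taxonomy dimensions (values taken from the abstract).
class Entity(Enum):
    HUMAN = "Human"
    AI = "AI"

class Intentionality(Enum):
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"

@dataclass
class Risk:
    description: str
    source_taxonomy: str          # which of the 43 source taxonomies it came from
    entity: Entity                # Causal Taxonomy layer
    intentionality: Intentionality
    timing: Timing
    domain: str                   # one of the 7 Domain Taxonomy domains
    subdomain: str                # one of the 23 subdomains

def filter_risks(risks, **criteria):
    """Return risks whose attributes match every given criterion."""
    return [r for r in risks
            if all(getattr(r, k) == v for k, v in criteria.items())]

# Two illustrative (invented) entries showing cross-taxonomy mapping.
risks = [
    Risk("Model leaks personal data", "Taxonomy A",
         Entity.AI, Intentionality.UNINTENTIONAL, Timing.POST_DEPLOYMENT,
         "Privacy & security", "Privacy leakage"),
    Risk("Deliberate disinformation campaign", "Taxonomy B",
         Entity.HUMAN, Intentionality.INTENTIONAL, Timing.POST_DEPLOYMENT,
         "Misinformation", "Disinformation"),
]

# Multidimensional query: AI-caused, post-deployment risks.
post_deployment_ai = filter_risks(risks, entity=Entity.AI,
                                  timing=Timing.POST_DEPLOYMENT)
```

Because each risk carries both causal and domain labels, a single filter call can combine dimensions from either layer, which is the kind of filtering the website and spreadsheets expose.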