Automating AI Failure Tracking: Semantic Association of Reports in AI Incident Database

📅 2025-07-31
🤖 AI Summary
The AI Incident Database (AIID) currently relies on manual expert annotation to associate new incident reports with existing events, resulting in poor scalability and delayed identification of emerging failure patterns. Method: We propose an automated fault report categorization framework that formulates the association task as a semantic ranking problem. Our approach constructs multi-granularity representations by jointly encoding report titles and descriptions, employs Transformer-based sentence embeddings to generate dense vector representations, and performs efficient retrieval via cosine similarity. We systematically compare this approach against bag-of-words baselines and cross-encoder architectures. Contributions/Results: Experiments demonstrate that our method significantly outperforms baselines across metrics including Recall@5; incorporating full-text descriptions, rather than titles alone, improves matching accuracy; the model exhibits robustness to variations in description length; and performance consistently improves with increasing training data scale. This framework provides a scalable, real-time solution for maintaining AI safety incident knowledge bases.
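The paper's own implementation is not reproduced here, but the retrieval step it describes (rank documented incidents by cosine similarity to a new report's embedding) can be sketched as follows. The function name, shapes, and the assumption that embeddings already exist (e.g. from a Transformer sentence encoder over title + description) are illustrative, not taken from the paper:

```python
import numpy as np

def rank_incidents(report_vec, incident_vecs, k=5):
    """Rank existing incidents by cosine similarity to a new report.

    report_vec: (d,) embedding of the new report (title + description).
    incident_vecs: (n, d) embeddings of previously documented incidents.
    Returns the indices of the top-k most similar incidents.
    """
    # Normalize so the dot product equals cosine similarity.
    report_vec = report_vec / np.linalg.norm(report_vec)
    incident_vecs = incident_vecs / np.linalg.norm(
        incident_vecs, axis=1, keepdims=True
    )
    sims = incident_vecs @ report_vec
    # Sort by descending similarity and keep the k best candidates.
    return np.argsort(-sims)[:k]
```

A matched report would then either be attached to the top-ranked incident automatically or surfaced to an annotator as a short candidate list, which is where the Recall@5 metric reported above comes from.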

📝 Abstract
Artificial Intelligence (AI) systems are transforming critical sectors such as healthcare, finance, and transportation, enhancing operational efficiency and decision-making processes. However, their deployment in high-stakes domains has exposed vulnerabilities that can result in significant societal harm. To systematically study and mitigate these risks, initiatives like the AI Incident Database (AIID) have emerged, cataloging over 3,000 real-world AI failure reports. Currently, associating a new report with the appropriate AI Incident relies on manual expert intervention, limiting scalability and delaying the identification of emerging failure patterns. To address this limitation, we propose a retrieval-based framework that automates the association of new reports with existing AI Incidents through semantic similarity modeling. We formalize the task as a ranking problem, where each report (comprising a title and a full textual description) is compared to previously documented AI Incidents based on embedding cosine similarity. Benchmarking traditional lexical methods, cross-encoder architectures, and transformer-based sentence embedding models, we find that the latter consistently achieve superior performance. Our analysis further shows that combining titles and descriptions yields substantial improvements in ranking accuracy compared to using titles alone. Moreover, retrieval performance remains stable across variations in description length, highlighting the robustness of the framework. Finally, we find that retrieval performance consistently improves as the training set expands. Our approach provides a scalable and efficient solution for supporting the maintenance of the AIID.
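The evaluation metric named above, Recall@5, scores a query as a hit when the correct incident appears among the top 5 ranked candidates. The helper below is a standard formulation of that metric, not code from the paper; the names are hypothetical:

```python
def recall_at_k(ranked_ids, true_id, k=5):
    """1 if the correct incident appears in the top-k candidates, else 0."""
    return int(true_id in ranked_ids[:k])

def mean_recall_at_k(all_ranked, all_true, k=5):
    """Average hit rate over a set of queries: each pair is one
    new report's ranked candidate list and its true incident id."""
    hits = sum(recall_at_k(r, t, k) for r, t in zip(all_ranked, all_true))
    return hits / len(all_true)
```

For per-query metrics like this, a single annotated true incident per report is enough; no graded relevance judgments are needed.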
Problem

Research questions and friction points this paper is trying to address.

Automate association of new reports with existing AI incidents
Improve scalability in tracking AI failure patterns
Enhance semantic similarity modeling for AI incident reports
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automates report association via semantic similarity
Uses transformer-based embedding models for accuracy
Combines titles and descriptions to improve ranking