🤖 AI Summary
Current AI model risk reporting is inadequate: only 14% of model cards mention risks, and 96% of those copy content from a small set of cards, leaving reports generic, non-contextual, and of little operational use. Method: This paper introduces RiskRAG, a Retrieval Augmented Generation based risk reporting system that draws on 450K model cards and 600 real-world AI incidents, guided by five design requirements identified from the literature and through co-design with 16 developers. The system identifies diverse model-specific risks, presents and prioritizes them clearly, contextualizes them for real-world uses, and offers actionable mitigation strategies. Contribution/Results: In a preliminary study, 50 developers preferred RiskRAG over standard model cards as it better met all design requirements; a final study with 38 developers, 40 designers, and 37 media professionals showed that RiskRAG improved model selection for specific applications, encouraging more careful and deliberative decision-making and bridging the gap between abstract risk documentation and actionable, context-aware governance.
📝 Abstract
Risk reporting is essential for documenting AI models, yet only 14% of model cards mention risks, and 96% of those copy content from a small set of cards, leading to a lack of actionable insights. Existing proposals for improving model cards do not resolve these issues. To address this, we introduce RiskRAG, a Retrieval Augmented Generation based risk reporting solution guided by five design requirements we identified from the literature and from co-design with 16 developers: identifying diverse model-specific risks, clearly presenting them, prioritizing them, contextualizing them for real-world uses, and offering actionable mitigation strategies. Drawing from 450K model cards and 600 real-world incidents, RiskRAG pre-populates contextualized risk reports. A preliminary study with 50 developers showed that they preferred RiskRAG over standard model cards, as it better met all the design requirements. A final study with 38 developers, 40 designers, and 37 media professionals showed that RiskRAG improved how they selected an AI model for a specific application, encouraging more careful and deliberative decision-making. The RiskRAG project page is accessible at: https://social-dynamics.net/ai-risks/card.
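To make the retrieval step of such a pipeline concrete, the sketch below shows a minimal, hypothetical version of what a RAG-based risk reporter does first: given a model-selection query, retrieve the most similar past incident descriptions to ground the generated report. This is an illustration only, not the authors' implementation; the corpus, function names, and the bag-of-words cosine similarity (a real system would likely use learned embeddings) are all assumptions.

```python
from collections import Counter
import math

def tokenize(text):
    # Lowercase and keep alphabetic tokens only (simplified preprocessing).
    return [t for t in text.lower().split() if t.isalpha()]

def cosine(a, b):
    # Cosine similarity between two term-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, corpus, k=2):
    # Rank corpus documents by similarity to the query; return top k.
    qv = Counter(tokenize(query))
    return sorted(corpus,
                  key=lambda doc: cosine(qv, Counter(tokenize(doc))),
                  reverse=True)[:k]

# Toy stand-ins for real-world incident summaries (hypothetical examples).
incidents = [
    "face recognition model misidentified individuals leading to wrongful arrest",
    "text generation model produced toxic content in customer support chat",
    "image classifier failed on underrepresented skin tones",
]

query = "risks of deploying a face recognition model"
context = retrieve(query, incidents, k=2)

# The retrieved incidents would then be passed to a generator as grounding
# context for a contextualized risk report.
prompt = ("Known incidents:\n"
          + "\n".join(f"- {c}" for c in context)
          + f"\n\nDraft a contextual risk report for: {query}")
```

With the toy corpus above, the face-recognition incident ranks first because it shares the most query terms, illustrating how incident grounding keeps the generated risks specific rather than boilerplate.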