🤖 AI Summary
AI developers and researchers disproportionately emphasize technical risks in their risk reporting while systematically overlooking human-interaction–driven harms, such as fraud and manipulation, that manifest in real-world deployments. Method: We conduct the first large-scale comparative analysis of 460,000 Hugging Face model cards, the MIT AI Risk Repository, and the AI Incident Database, employing NLP-driven risk classification and manual coding to construct a publicly available, empirically grounded AI model risk taxonomy comprising around 3,000 distinct risk entries. Contribution/Results: Our analysis reveals a significant misalignment between developer/researcher risk priorities and the actual distribution of documented AI incidents, with socio-technical and interactive risks particularly underestimated. We therefore propose integrating structured, human-centered, and systemic risk assessment frameworks early in AI system design. This work provides both empirical evidence and methodological infrastructure to advance robust, human-aligned AI risk governance.
📝 Abstract
We analyzed nearly 460,000 AI model cards from Hugging Face to examine how developers report risks. From these, we extracted around 3,000 unique risk mentions and built the *AI Model Risk Catalog*. We compared these risks with those identified by researchers in the MIT AI Risk Repository and with real-world incidents from the AI Incident Database. Developers focused on technical issues such as bias and safety, while researchers emphasized broader social impacts. Both groups paid little attention to fraud and manipulation, which are common harms arising from how people interact with AI. Our findings show the need for clearer, structured risk reporting that helps developers consider human-interaction and systemic risks early in the design process. The catalog and paper appendix are available at: https://social-dynamics.net/ai-risks/catalog.