🤖 AI Summary
The widespread deployment of foundation models introduces multifaceted AI risks, yet existing taxonomies offer practitioners little practical guidance for identifying context-specific risks in real-world usage scenarios. Method: We propose the first risk identification framework tailored to foundation model *use governance*, grounded in four design principles that shift AI risk identification from static classification toward dynamic, scenario-aware, and actionable analysis. Integrating AI risk taxonomies, use governance theory, and requirements-driven engineering, validated through a case-driven paradigm, we develop an extensible prototype. Contribution/Results: Evaluated on representative deployment use cases, the prototype identifies critical risks such as privacy leakage and algorithmic bias, demonstrating markedly improved practicality and operational feasibility. This work advances AI safety governance by delivering both a methodological foundation and an implementable tool for risk-aware foundation model deployment.
📝 Abstract
As foundation models grow in both popularity and capability, researchers have uncovered a variety of ways that these models can pose risks to their owners, users, or others. Despite efforts to measure these risks via benchmarks and to catalog them in AI risk taxonomies, there is little guidance for practitioners on how to determine which risks are relevant for a given foundation model use. In this paper, we address this gap and develop requirements and an initial design for a risk identification framework. To do so, we look to prior literature to identify challenges in building a foundation model risk identification framework and adapt ideas from use governance to synthesize four design requirements. We then demonstrate how a candidate framework can address these design requirements and provide a foundation model use example to show how the framework works in practice for a small subset of risks.
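The abstract does not specify how the framework maps a usage scenario to relevant risks, but the core idea of scenario-aware risk identification can be sketched as matching a described use context against the triggering conditions of taxonomy entries. The sketch below is purely illustrative: the taxonomy entries, the attribute tags, and the `identify_risks` function are all assumptions, not the paper's actual framework.

```python
# Hypothetical, minimal risk taxonomy: each risk is tagged with the
# usage-context attributes under which it becomes relevant.
RISK_TAXONOMY = {
    "privacy leakage": {"handles_personal_data"},
    "algorithmic bias": {"makes_decisions_about_people"},
    "hallucinated facts": {"user_facing_text_generation"},
}

def identify_risks(use_context: set[str]) -> list[str]:
    """Return taxonomy risks whose triggering attributes all appear
    in the described usage context (a set of attribute tags)."""
    return sorted(
        risk for risk, triggers in RISK_TAXONOMY.items()
        if triggers <= use_context  # subset test: all triggers present
    )

# Illustrative example: an HR screening assistant built on a foundation model.
context = {"handles_personal_data", "makes_decisions_about_people"}
print(identify_risks(context))  # ['algorithmic bias', 'privacy leakage']
```

In practice the paper's framework is presumably far richer (dynamic, extensible, and requirements-driven rather than a static lookup), but even this toy version shows the shift from cataloging risks to asking which risks apply to a concrete use.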