AI Risk-Management Standards Profile for General-Purpose AI (GPAI) and Foundation Models

📅 2025-06-30
🤖 AI Summary
General-purpose AI (GPAI) and foundation models introduce systemic safety and compliance risks due to their unpredictability, scale, and deployment complexity. Method: This paper proposes a structured risk-management framework tailored to the characteristics of GPAI. It integrates the NIST AI Risk Management Framework (AI RMF) with ISO/IEC 23894, adapting these standards to address generative AI's distinctive risk profile across the full lifecycle: identification, analysis, and mitigation. Contribution/Results: The framework delivers three key advances: (1) a standardized localization of ISO/IEC 23894 and the NIST AI RMF to GPAI contexts; (2) a practical, actionable catalog of control measures with implementation guidance; and (3) a harmonized risk-management specification balancing technical feasibility with regulatory requirements. Informed by engagement with large foundation-model developers and downstream application providers, it advances standardization in risk governance and supports the trustworthiness of AI systems.

📝 Abstract
Increasingly multi-purpose AI models, such as cutting-edge large language models or other 'general-purpose AI' (GPAI) models, 'foundation models,' generative AI models, and 'frontier models' (typically all referred to hereafter with the umbrella term 'GPAI/foundation models' except where greater specificity is needed), can provide many beneficial capabilities but also risks of adverse events with profound consequences. This document provides risk-management practices or controls for identifying, analyzing, and mitigating risks of GPAI/foundation models. We intend this document primarily for developers of large-scale, state-of-the-art GPAI/foundation models; others that can benefit from this guidance include downstream developers of end-use applications that build on a GPAI/foundation model. This document facilitates conformity with or use of leading AI risk management-related standards, adapting and building on the generic voluntary guidance in the NIST AI Risk Management Framework and ISO/IEC 23894, with a focus on the unique issues faced by developers of GPAI/foundation models.
Problem

Research questions and friction points this paper is trying to address.

Managing the risks of multi-purpose AI models, including GPAI and foundation models
Providing risk-management practices for identifying, analyzing, and mitigating AI risks
Adapting NIST and ISO/IEC guidance into standards usable by GPAI developers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Risk-management practices and controls specific to GPAI/foundation models
Adapts the generic guidance of the NIST AI RMF and ISO/IEC 23894
Focuses on issues unique to developers of large-scale GPAI/foundation models
Authors
Anthony M. Barrett — UC Berkeley, Center for Long-Term Cybersecurity
Jessica Newman — UC Berkeley, Center for Long-Term Cybersecurity
Brandie Nonnecke — UC Berkeley, Center for Long-Term Cybersecurity
Nada Madkour — UC Berkeley, Center for Long-Term Cybersecurity
Dan Hendrycks — Director of the Center for AI Safety (advisor for xAI and Scale)
Evan R. Murphy — UC Berkeley, Center for Long-Term Cybersecurity
Krystal Jackson — UC Berkeley, Center for Long-Term Cybersecurity
Deepika Raman — UC Berkeley, Center for Long-Term Cybersecurity