Existing Industry Practice for the EU AI Act's General-Purpose AI Code of Practice Safety and Security Measures

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the gap between the EU AI Act's draft Code of Practice for General-Purpose AI (GPAI) and current industry practice. Methodologically, it conducts a systematic content analysis and structured policy mapping that aligns, measure by measure, the Code's 16 Safety and Security commitments (II.1-II.16) with publicly disclosed safety practices from over a dozen leading AI developers (e.g., OpenAI, Anthropic, Google DeepMind). Drawing on multi-source documentation, including frontier safety frameworks, model cards, and technical reports, it constructs a verifiable evidence repository that identifies both existing industry precedent for the Code's measures and remaining gaps. The key contribution is the first traceable, neutral "regulation–practice" mapping, which empirically surfaces where regulatory intent and current engineering practice diverge. This mapping gives policymakers and practitioners a shared, evidence-based foundation for constructive dialogue and targeted alignment efforts.

📝 Abstract
This report provides a detailed comparison between the measures proposed in the EU AI Act's General-Purpose AI (GPAI) Code of Practice (Third Draft) and current practices adopted by leading AI companies. As the EU moves toward enforcing binding obligations for GPAI model providers, the Code of Practice will be key to bridging legal requirements with concrete technical commitments. Our analysis focuses on the draft's Safety and Security section, which is relevant only to providers of the most advanced models (Commitments II.1-II.16), and excerpts quotes from current public-facing documents that are relevant to each individual measure. We systematically reviewed different document types, including companies' frontier safety frameworks and model cards, from over a dozen companies, including OpenAI, Anthropic, Google DeepMind, Microsoft, Meta, Amazon, and others. This report is not meant to be an indication of legal compliance, nor does it take any prescriptive viewpoint about the Code of Practice or companies' policies. Instead, it aims to inform the ongoing dialogue between regulators and GPAI model providers by surfacing evidence of precedent.
Problem

Research questions and friction points this paper is trying to address.

How do the Safety and Security commitments (II.1-II.16) of the EU AI Act's GPAI Code of Practice compare with what leading AI companies already do?
Which of the Code's measures have documented precedent in companies' public-facing documents, and where do gaps remain?
Regulators and GPAI model providers lack a shared, traceable evidence base connecting the Code's requirements to current industry practice
Innovation

Methods, ideas, or system contributions that make the work stand out.

Maps each of the 16 Safety and Security commitments to relevant excerpts from companies' public documents
Systematically reviews frontier safety frameworks, model cards, and technical reports from over a dozen companies, including OpenAI, Anthropic, Google DeepMind, Microsoft, Meta, and Amazon
Builds a neutral, verifiable evidence repository to inform regulator-provider dialogue, without assessing legal compliance or taking a prescriptive stance
Lily Stelling
SaferAI
Artificial Intelligence · AI risk management
Mick Yang
University of Pennsylvania
Rokas Gipiškis
AI Standards Lab, Vilnius University
Leon Staufer
University of Cambridge, Technical University of Munich, Ludwig Maximilian University of Munich
Ze Shen Chin
AI Standards Lab
Siméon Campos
SaferAI
Michael Chen
Undergraduate, Carnegie Mellon University