🤖 AI Summary
This study addresses the gap between the EU AI Act's draft Code of Practice for General-Purpose AI (GPAI) and industry implementation. Methodologically, it conducts systematic content analysis and structured policy mapping to align, measure by measure, the Code's 16 Safety and Security commitments for the most advanced models with publicly disclosed safety practices from over a dozen leading AI developers (e.g., OpenAI, Anthropic, Google DeepMind). Drawing on multi-source documentation, including safety frameworks, model cards, and technical reports, it constructs a verifiable evidence repository that identifies both existing industry measures and notable gaps. The key contribution is the first traceable, neutral regulation-to-practice mapping, which empirically surfaces where regulatory intent and engineering practice diverge. This mapping gives policymakers and practitioners a shared, evidence-based foundation for constructive dialogue and targeted alignment efforts.
📝 Abstract
This report provides a detailed comparison between the measures proposed in the EU AI Act's General-Purpose AI (GPAI) Code of Practice (Third Draft) and current practices adopted by leading AI companies. As the EU moves toward enforcing binding obligations for GPAI model providers, the Code of Practice will be key to bridging legal requirements with concrete technical commitments. Our analysis focuses on the draft's Safety and Security section, which applies only to providers of the most advanced models (Commitments II.1-II.16), and excerpts quotes from current public-facing documents that are relevant to each individual measure. We systematically reviewed different document types, including companies' frontier safety frameworks and model cards, from over a dozen companies, among them OpenAI, Anthropic, Google DeepMind, Microsoft, Meta, and Amazon. This report is not meant to be an indication of legal compliance, nor does it take any prescriptive viewpoint on the Code of Practice or companies' policies. Instead, it aims to inform the ongoing dialogue between regulators and GPAI model providers by surfacing evidence of precedent.