🤖 AI Summary
This study identifies a critical "benchmark–regulation gap" between current AI evaluation practices and the EU AI Act's regulatory requirements: mainstream benchmarks overwhelmingly prioritize behavioral propensities such as hallucination (53.7% of the corpus) and discriminatory bias (28.9%), while systematically neglecting high-impact systemic risks such as loss of control (0.4% coverage) and cyber offence (0.8%); capabilities central to loss-of-control scenarios, including self-replication, autonomous AI development, and evasion of human oversight, are entirely absent. To close this gap, we propose Bench-2-CoP, the first framework to systematically map 194,955 benchmark questions onto the AI Act's legally defined capability and propensity categories using a validated LLM-as-judge paradigm, enabling quantitative, regulation-aligned assessment of AI systems. Our work establishes a methodological foundation and empirical basis for compliance-driven AI evaluation, advancing regulatory science in trustworthy AI.
📝 Abstract
The rapid advancement of General Purpose AI (GPAI) models necessitates robust evaluation frameworks, especially with emerging regulations like the EU AI Act and its associated Code of Practice (CoP). Current AI evaluation practices depend heavily on established benchmarks, but these tools were not designed to measure the systemic risks that are the focus of the new regulatory landscape. This research addresses the urgent need to quantify this "benchmark–regulation gap." We introduce Bench-2-CoP, a novel, systematic framework that uses validated LLM-as-judge analysis to map the coverage of 194,955 questions from widely used benchmarks against the EU AI Act's taxonomy of model capabilities and propensities. Our findings reveal a profound misalignment: the evaluation ecosystem is overwhelmingly focused on a narrow set of behavioral propensities, such as "Tendency to hallucinate" (53.7% of the corpus) and "Discriminatory bias" (28.9%), while critical functional capabilities are dangerously neglected. Crucially, capabilities central to loss-of-control scenarios, including evading human oversight, self-replication, and autonomous AI development, receive zero coverage in the entire benchmark corpus. This translates to a near-total evaluation gap for systemic risks like "Loss of Control" (0.4% coverage) and "Cyber Offence" (0.8% coverage). This study provides the first comprehensive, quantitative analysis of this gap, offering critical insights for policymakers to refine the CoP and for developers to build the next generation of evaluation tools, ultimately fostering safer and more compliant AI.
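The core mechanic described above (classify each benchmark question into a regulatory taxonomy category via an LLM judge, then aggregate into coverage percentages) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the taxonomy labels are a small illustrative subset, and the `judge` function is a keyword-rule stand-in for the actual LLM-as-judge call so the sketch runs without an API.

```python
from collections import Counter

# Illustrative subset of the EU AI Act capability/propensity taxonomy;
# the paper's full category list is not reproduced here.
TAXONOMY = [
    "Tendency to hallucinate",
    "Discriminatory bias",
    "Cyber Offence",
    "Loss of Control",
]

def judge(question: str) -> str:
    """Stand-in for the LLM-as-judge step: the real framework prompts an
    LLM to assign each benchmark question to a taxonomy category.  A
    trivial keyword heuristic keeps this sketch self-contained."""
    q = question.lower()
    if "exploit" in q or "malware" in q:
        return "Cyber Offence"
    if "stereotype" in q:
        return "Discriminatory bias"
    return "Tendency to hallucinate"

def coverage(questions: list[str]) -> dict[str, float]:
    """Aggregate per-category counts into coverage percentages of the corpus."""
    counts = Counter(judge(q) for q in questions)
    total = len(questions)
    return {cat: 100.0 * counts.get(cat, 0) / total for cat in TAXONOMY}

# Tiny hypothetical corpus (the paper's corpus has 194,955 questions).
corpus = [
    "In what year did the Eiffel Tower open to the public?",
    "Does this job advertisement reinforce a gender stereotype?",
    "Write code to exploit a buffer overflow in this C program.",
]
print(coverage(corpus))
```

Categories with no matching questions, such as "Loss of Control" in this toy corpus, simply report 0% coverage, which is how the paper surfaces evaluation gaps at scale.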