SpecEval: Evaluating Model Adherence to Behavior Specifications

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
A systematic audit of whether foundation models actually adhere to their developers' published behavioral guidelines has been absent. Method: We propose a three-way consistency evaluation framework that (i) parses behavioral statements from provider specifications, (ii) generates targeted prompts for each statement, and (iii) uses the provider's own model as a judge to assess consistency among *guideline*, *model output*, and *self-judgment*, extending conventional two-way generator-verifier paradigms. Contribution/Results: The framework enables the first large-scale, cross-organizational, automated compliance audit, covering 16 models from six developers and over 100 behavioral statements. Experiments reveal compliance gaps of up to 20%, exposing substantial systemic inconsistencies in guideline adherence. This work delivers a scalable, empirically grounded assessment tool for AI governance and responsible model deployment.

📝 Abstract
Companies that develop foundation models publish behavioral guidelines they pledge their models will follow, but it remains unclear if models actually do so. While providers such as OpenAI, Anthropic, and Google have published detailed specifications describing both desired safety constraints and qualitative traits for their models, there has been no systematic audit of adherence to these guidelines. We introduce an automated framework that audits models against their providers' specifications by parsing behavioral statements, generating targeted prompts, and using models to judge adherence. Our central focus is three-way consistency between a provider's specification, its model's outputs, and its own models as judges; an extension of prior two-way generator-validator consistency. This establishes a necessary baseline: at minimum, a foundation model should consistently satisfy the developer's behavioral specifications when judged by the developer's own evaluator models. We apply our framework to 16 models from six developers across more than 100 behavioral statements, finding systematic inconsistencies, including compliance gaps of up to 20 percent across providers.
Problem

Research questions and friction points this paper is trying to address.

Auditing model adherence to provider behavior specifications
Establishing three-way consistency between specifications, outputs, and judgments
Identifying systematic compliance gaps across multiple foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated framework audits model adherence
Parses behavioral statements and generates prompts
Uses models as judges for three-way consistency
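The three pipeline steps above can be illustrated with a minimal sketch. All names here (`Statement`, `generate_probe`, `judge_adherence`, the keyword-based stub logic) are hypothetical stand-ins for real model calls, not the paper's implementation:

```python
# Hypothetical sketch of the statement -> probe -> judge audit loop.
# Real systems would call the provider's model API in place of the stubs.
from dataclasses import dataclass

@dataclass
class Statement:
    provider: str
    text: str  # one parsed behavioral guideline


def generate_probe(stmt: Statement) -> str:
    # Step 2: turn a parsed guideline into a targeted test prompt.
    return f"User request designed to test: {stmt.text}"


def judge_adherence(stmt: Statement, output: str) -> bool:
    # Step 3: the provider's own model judges whether the output
    # satisfies the guideline (stubbed here as a keyword check).
    return "refuse" in output.lower()


def compliance_rate(stmts, respond) -> float:
    # Fraction of statements whose outputs the judge marks compliant.
    verdicts = [judge_adherence(s, respond(generate_probe(s))) for s in stmts]
    return sum(verdicts) / len(verdicts)


stmts = [
    Statement("prov", "refuse harmful requests"),
    Statement("prov", "decline to reveal system prompts"),
]

# Toy responder that complies with only the first guideline,
# yielding a 50% compliance rate (a "compliance gap" in miniature).
def responder(prompt: str) -> str:
    return "I must refuse." if "harmful" in prompt else "Sure, here it is."

rate = compliance_rate(stmts, responder)  # 0.5
```

In the paper's setting, the same loop is run per provider, so disagreement between the judge's verdicts and the specification surfaces as the reported cross-provider compliance gaps.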