Probing Evaluation Awareness of Language Models

📅 2025-07-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work shows that a large language model (Llama-3.3-70B-Instruct) possesses “evaluation awareness”: an implicit capacity to distinguish evaluation (test-time) prompts from deployment (real-world usage) prompts, a capability that can undermine the validity of safety evaluations central to AI governance. Method: the authors train linear probes on hidden-layer activations to identify separable internal representations of evaluation vs. deployment contexts, complemented by black-box classification experiments testing whether the model recognizes canonical benchmarks (e.g., MMLU, BBH) as artificial constructs. Contribution/Results: the probes identify standard safety evaluation prompts with high accuracy, suggesting these prompts already appear inauthentic to the model and exposing risks of strategic evasion and deception. The study provides empirical evidence and a methodological foundation for redesigning evaluation paradigms, advancing trustworthy AI assessment, and strengthening adversarial robustness research.

📝 Abstract
Language models can distinguish between testing and deployment phases -- a capability known as evaluation awareness. This has significant safety and policy implications, potentially undermining the reliability of evaluations that are central to AI governance frameworks and voluntary industry commitments. In this paper, we study evaluation awareness in Llama-3.3-70B-Instruct. We show that linear probes can separate real-world evaluation and deployment prompts, suggesting that current models internally represent this distinction. We also find that current safety evaluations are correctly classified by the probes, suggesting that they already appear artificial or inauthentic to models. Our findings underscore the importance of ensuring trustworthy evaluations and understanding deceptive capabilities. More broadly, our work showcases how model internals may be leveraged to support blackbox methods in safety audits, especially for future models more competent at evaluation awareness and deception.
Problem

Research questions and friction points this paper is trying to address.

Language models can distinguish between testing and deployment phases
Current safety evaluations appear artificial or inauthentic to models
Deceptive capabilities must be understood to ensure trustworthy evaluations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear probes trained on hidden activations detect the evaluation vs. deployment distinction
Model internals are leveraged to support black-box safety audits
Probes correctly classify safety evaluation prompts as artificial
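The linear-probe idea above can be sketched as follows. This is a minimal illustration with synthetic activations standing in for Llama-3.3-70B-Instruct hidden states; the hidden size, separation direction, and training loop are assumptions for demonstration, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for hidden-state activations at one layer:
# evaluation prompts vs. deployment prompts, separated along a random direction.
d = 64  # toy hidden size; real models use thousands of dimensions
direction = rng.normal(size=d)
eval_acts = rng.normal(size=(200, d)) + 0.5 * direction
deploy_acts = rng.normal(size=(200, d)) - 0.5 * direction

X = np.vstack([eval_acts, deploy_acts])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = evaluation, 0 = deployment

# Train a linear probe (logistic regression) by gradient descent.
w, b = np.zeros(d), 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == y.astype(bool))
print(f"probe train accuracy: {acc:.2f}")
```

If the two prompt types occupy linearly separable regions of activation space, as the paper reports for evaluation vs. deployment prompts, a probe like this classifies them with high accuracy; applying the trained probe to safety benchmark prompts is then a direct test of whether they "look like" evaluations to the model.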