The Elicitation Game: Evaluating Capability Elicitation Techniques

📅 2025-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the reliable evaluation and controllable elicitation of latent capabilities in large language models (LLMs). We propose constructing controllable “model organisms” by explicitly modeling latent capabilities via two mechanisms: password locking and circuit breaking. Using these controlled models, we systematically evaluate prominent elicitation paradigms—including prompt engineering, activation-space steering, and supervised fine-tuning—on multiple-choice question answering (MCQA) and code generation tasks. To our knowledge, this is the first work to quantitatively compare diverse elicitation methods within a unified framework, revealing synergistic gains from their combinations. Crucially, circuit-breaking training substantially improves model robustness to elicitation techniques. Experiments demonstrate that prompt-based methods fully elicit MCQA capabilities in password-locked models but fail entirely on circuit-broken ones; only supervised fine-tuning effectively unlocks code-generation capability in circuit-broken models; and fine-tuning emerges as the most robust strategy for enhancing evaluation reliability.

Technology Category

Application Category

📝 Abstract
Capability evaluations are required to understand and regulate AI systems that may be deployed or further developed. Therefore, it is important that evaluations provide an accurate estimation of an AI system's capabilities. However, in numerous cases, previously latent capabilities have been elicited from models, sometimes long after initial release. Accordingly, substantial efforts have been made to develop methods for eliciting latent capabilities from models. In this paper, we evaluate the effectiveness of capability elicitation techniques by intentionally training model organisms -- language models with hidden capabilities that are revealed by a password. We introduce a novel method for training model organisms, based on circuit breaking, which is more robust to elicitation techniques than standard password-locked models. We focus on elicitation techniques based on prompting and activation steering, and compare these to fine-tuning methods. Prompting techniques can elicit the actual capability of both password-locked and circuit-broken model organisms in an MCQA setting, while steering fails to do so. For a code-generation task, only fine-tuning can elicit the hidden capabilities of our novel model organism. Additionally, our results suggest that combining techniques improves elicitation. Still, if possible, fine-tuning should be the method of choice to improve the trustworthiness of capability evaluations.
Problem

Research questions and friction points this paper is trying to address.

Evaluate effectiveness of capability elicitation techniques
Compare prompting, activation steering, and fine-tuning methods
Improve trustworthiness of AI capability evaluations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Circuit breaking for model organism training
Prompting techniques for capability elicitation
Fine-tuning enhances hidden capability detection
🔎 Similar Papers
No similar papers found.