🤖 AI Summary
This work addresses the challenge of effectively evaluating alignment auditing techniques for language models exhibiting hidden behaviors. It introduces AuditBench, the first systematic benchmark for alignment auditing, comprising 56 models implanted with diverse hidden behaviors, and develops an autonomous investigative agent capable of integrating configurable auditing tools to systematically assess method effectiveness. Leveraging synthetic document training, demonstration learning, auxiliary-model-guided prompt generation, and both black-box and white-box tool integration, the study finds that black-box auditing tools employing prompts generated by auxiliary models are most effective. Models trained on synthetic data prove more auditable, whereas adversarial training substantially increases auditing difficulty. The findings reveal a significant performance gap between current auditing tools and autonomous agents, underscoring the practical advantages of black-box approaches.
📝 Abstract
We introduce AuditBench, an alignment auditing benchmark. AuditBench consists of 56 language models with implanted hidden behaviors. Each model has one of 14 concerning behaviors--such as sycophantic deference, opposition to AI regulation, or secret geopolitical loyalties--which it does not confess to when directly asked. AuditBench models are highly diverse--some are subtle, while others are overt, and we use varying training techniques both for implanting behaviors and training models not to confess. To demonstrate AuditBench's utility, we develop an investigator agent that autonomously employs a configurable set of auditing tools. By measuring investigator agent success using different tools, we can evaluate their efficacy. Notably, we observe a tool-to-agent gap, where tools that perform well in standalone non-agentic evaluations fail to translate into improved performance when used with our investigator agent. We find that our most effective tools involve scaffolded calls to auxiliary models that generate diverse prompts for the target. White-box interpretability tools can be helpful, but the agent performs best with black-box tools. We also find that audit success varies greatly across training techniques: models trained on synthetic documents are easier to audit than models trained on demonstrations, with better adversarial training further increasing auditing difficulty. We release our models, agent, and evaluation framework to support future quantitative, iterative science on alignment auditing.