ALMANACS: A Simulatability Benchmark for Language Model Explainability

📅 2023-12-20
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
This work addresses the lack of apples-to-apples evaluation for language model interpretability by proposing ALMANACS, a fully automated benchmark centered on *simulatability*, i.e., the extent to which an explanation enables accurate prediction of a target model's behavior on new inputs. Methodologically, it introduces an LLM-driven behavioral prediction framework: a second language model predicts the target model's outputs on novel inputs, including inputs under a train-test distributional shift, solely from the given explanations, thereby removing the human-evaluation bottleneck. The benchmark spans twelve safety-relevant topics, such as ethical reasoning and advanced AI behaviors, and integrates prominent explanation methods: counterfactuals, rationalizations, attention, and Integrated Gradients. Key contributions include: (1) formalizing simulatability as a quantifiable, scalable, and fully automated evaluation paradigm; and (2) empirically demonstrating that, averaged across all topics, no evaluated explanation method outperforms the explanation-free control, leaving simulatability an open challenge for interpretability research.
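
The evaluation loop the summary describes can be sketched as follows. This is a minimal illustrative Python sketch, not the authors' released code: `target_model`, `explain`, and `predictor_model` are hypothetical stubs standing in for the target LM, an explanation method, and the predictor LM.

```python
# Hypothetical stand-ins: none of these names come from the ALMANACS release.

def target_model(question: str) -> float:
    """Target LM's behavior: its probability of answering 'yes'."""
    return 0.7  # stub for illustration

def explain(question: str, answer: float) -> str:
    """An explanation method, e.g. a rationalization of the target's answer."""
    return f"Leans {'yes' if answer > 0.5 else 'no'} because ..."  # stub

def predictor_model(demos: list[tuple[str, float, str]], question: str) -> float:
    """Predictor LM: guesses the target's answer from explained demonstrations."""
    return 0.5  # stub; in ALMANACS this is another language model

def simulatability_error(train_qs: list[str], test_qs: list[str]) -> float:
    """Mean absolute prediction error on held-out questions.

    Demonstrations pair each training question with the target's answer and
    an explanation; test questions are distribution-shifted, so only faithful
    explanations should help the predictor."""
    demos = [(q, target_model(q), explain(q, target_model(q))) for q in train_qs]
    errs = [abs(predictor_model(demos, q) - target_model(q)) for q in test_qs]
    return sum(errs) / len(errs)

print(simulatability_error(["Scenario A?"], ["Shifted scenario B?"]))
```

An explanation method scores well only if it drives this error below what the predictor achieves with no explanations at all, which is the benchmark's control condition.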
📝 Abstract
How do we measure the efficacy of language model explainability methods? While many explainability methods have been developed, they are typically evaluated on bespoke tasks, preventing an apples-to-apples comparison. To help fill this gap, we present ALMANACS, a language model explainability benchmark. ALMANACS scores explainability methods on simulatability, i.e., how well the explanations improve behavior prediction on new inputs. The ALMANACS scenarios span twelve safety-relevant topics such as ethical reasoning and advanced AI behaviors; they have idiosyncratic premises to invoke model-specific behavior; and they have a train-test distributional shift to encourage faithful explanations. By using another language model to predict behavior based on the explanations, ALMANACS is a fully automated benchmark. While not a replacement for human evaluations, we aim for ALMANACS to be a complementary, automated tool that allows for fast, scalable evaluation. Using ALMANACS, we evaluate counterfactual, rationalization, attention, and Integrated Gradients explanations. Our results are sobering: when averaged across all topics, no explanation method outperforms the explanation-free control. We conclude that despite modest successes in prior work, developing an explanation method that aids simulatability in ALMANACS remains an open challenge.
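
One way to make the "no method outperforms the explanation-free control" comparison concrete: assuming, as in the sketch above, that behavior is the target model's yes-probability on each question, predictions can be scored against the true probabilities with a divergence such as Bernoulli KL. The metric choice and the toy numbers below are assumptions for illustration, not values from the paper.

```python
import math

def bernoulli_kl(p: float, q: float, eps: float = 1e-6) -> float:
    """KL(p || q) between two yes/no answer distributions."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def mean_kl(actual: list[float], predicted: list[float]) -> float:
    """Average divergence between true and predicted behavior."""
    return sum(bernoulli_kl(a, b) for a, b in zip(actual, predicted)) / len(actual)

# An explanation method helps only if its predictions beat the
# explanation-free control; the abstract reports that, averaged
# across topics, none of the evaluated methods does.
actual    = [0.9, 0.2, 0.6]   # target model's yes-probabilities (toy values)
with_expl = [0.8, 0.3, 0.5]   # predictor given explanations
control   = [0.7, 0.4, 0.5]   # predictor with no explanations
print(mean_kl(actual, with_expl), mean_kl(actual, control))
```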
Problem

Research questions and friction points this paper is trying to address.

Evaluation
Comparison
Interpretability Methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

ALMANACS
Automated Evaluation
Language Model Interpretability