🤖 AI Summary
Automated evaluation of interpretability struggles to keep pace with increasingly autonomous explanatory agents: paradigms that score a system by how well it replicates human expert explanations suffer from subjectivity, incompleteness, and the risk that LLM-based systems reproduce published findings through memorization or informed guessing. This work builds a large language model-driven research agent that iteratively designs experiments and refines hypotheses on circuit analysis tasks. Across six benchmark tasks the agent appears competitive with human experts, but closer examination exposes fundamental pitfalls of replication-based evaluation. To address some of these pitfalls, the authors propose an unsupervised intrinsic evaluation grounded in the functional interchangeability of model components, highlighting key limitations of replication-based assessment.
📝 Abstract
Automated interpretability systems aim to reduce the need for human labor and scale analysis to increasingly large models and diverse tasks. Recent efforts toward this goal leverage large language models (LLMs) at increasing levels of autonomy, ranging from fixed one-shot workflows to fully autonomous interpretability agents. This shift creates a corresponding need to scale evaluation approaches to keep pace with both the volume and complexity of generated explanations. We investigate this challenge in the context of automated circuit analysis -- explaining the roles of model components when performing specific tasks. To this end, we build an agentic system in which a research agent iteratively designs experiments and refines hypotheses. When evaluated against human expert explanations across six circuit analysis tasks in the literature, the system appears competitive. However, closer examination reveals several pitfalls of replication-based evaluation: human expert explanations can be subjective or incomplete, outcome-based comparisons obscure the research process, and LLM-based systems may reproduce published findings via memorization or informed guessing. To address some of these pitfalls, we propose an unsupervised intrinsic evaluation based on the functional interchangeability of model components. Our work demonstrates fundamental challenges in evaluating complex automated interpretability systems and reveals key limitations of replication-based evaluation.
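The core idea behind the proposed intrinsic evaluation is that two model components serving the same functional role should be swappable without changing task behavior. A minimal toy sketch of such an interchangeability check follows; all function names and the toy "components" are illustrative assumptions, not the paper's actual implementation:

```python
# Toy sketch of a functional-interchangeability check: if swapping one
# component for another leaves the model's task outputs unchanged, the
# two components are functionally interchangeable on that task.
# Everything here is illustrative, not the paper's implementation.

def run_with_component(inputs, component):
    """Toy 'model': apply a single component to each input."""
    return [component(x) for x in inputs]

def interchangeability_score(inputs, comp_a, comp_b):
    """Fraction of inputs on which swapping comp_a for comp_b
    leaves the model's output unchanged (1.0 = fully interchangeable)."""
    out_a = run_with_component(inputs, comp_a)
    out_b = run_with_component(inputs, comp_b)
    agree = sum(a == b for a, b in zip(out_a, out_b))
    return agree / len(out_a)

# Two toy components that agree only on non-negative inputs.
double = lambda x: 2 * x
abs_double = lambda x: 2 * abs(x)

print(interchangeability_score(range(-2, 3), double, abs_double))  # 0.6
```

In a real circuit-analysis setting the "components" would be attention heads or MLP sublayers, the swap would be an activation patch, and agreement would be measured on the model's task metric rather than exact output equality; the score above is only the simplest possible instance of that idea.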