HyperDAS: Towards Automating Mechanistic Interpretability with Hypernetworks

📅 2025-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the reliance on manual annotation and the risk of injecting external information in concept-feature localization for neural network interpretability. We propose the first hypernetwork framework tailored for mechanistic interpretability. Methodologically, we integrate a Transformer-based hypernetwork with Distributed Alignment Search (DAS) to automatically discover where concepts are located in the residual stream and to jointly model the associated features; a counterfactual supervised training objective and disentangled residual-stream features remove the need for brute-force search and guard against injecting extraneous information. Theoretical analysis and empirical evaluation confirm minimal interference with the original model's behavior. Our approach achieves state-of-the-art performance on the RAVEL benchmark and efficiently disentangles semantic concepts, such as birth year, in Llama3-8B, significantly enhancing both the automation and the faithfulness of large language model interpretation.

📝 Abstract
Mechanistic interpretability has made great strides in identifying neural network features (e.g., directions in hidden activation space) that mediate concepts (e.g., the birth year of a person) and enable predictable manipulation. Distributed alignment search (DAS) leverages supervision from counterfactual data to learn concept features within hidden states, but DAS assumes we can afford to conduct a brute-force search over potential feature locations. To address this, we present HyperDAS, a transformer-based hypernetwork architecture that (1) automatically locates the token positions of the residual stream in which a concept is realized and (2) constructs features of those residual stream vectors for the concept. In experiments with Llama3-8B, HyperDAS achieves state-of-the-art performance on the RAVEL benchmark for disentangling concepts in hidden states. In addition, we review the design decisions we made to mitigate the concern that HyperDAS (like all powerful interpretability methods) might inject new information into the target model rather than faithfully interpreting it.
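The core operation the abstract describes is a DAS-style interchange intervention: swap the component of a residual-stream vector that lies in a learned concept subspace with the corresponding component from a counterfactual run, leaving everything orthogonal to that subspace untouched. A minimal numpy sketch, assuming an already-learned orthonormal basis `R` for the concept subspace (in HyperDAS both the token positions and `R` are produced by the hypernetwork; here they are fixed toy inputs):

```python
import numpy as np

def interchange_intervention(h_base, h_source, R):
    """Replace the component of h_base in the subspace spanned by the
    orthonormal columns of R with h_source's component in that subspace,
    keeping the orthogonal complement of h_base unchanged."""
    proj = R @ R.T  # projector onto the learned concept subspace
    return h_base - proj @ h_base + proj @ h_source

# Toy example: a 2-D concept subspace inside an 8-D residual stream.
rng = np.random.default_rng(0)
R, _ = np.linalg.qr(rng.normal(size=(8, 2)))  # orthonormal basis (8 x 2)
h_base = rng.normal(size=8)     # hidden state from the base prompt
h_source = rng.normal(size=8)   # hidden state from the counterfactual prompt
h_new = interchange_intervention(h_base, h_source, R)
```

After the swap, `h_new` agrees with `h_source` inside the concept subspace and with `h_base` outside it, which is what lets a learned subspace be tested for mediating exactly one concept.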
Problem

Research questions and friction points this paper is trying to address.

Locating concept features in neural networks currently relies on manual annotation.
Brute-force search over potential feature locations is prohibitively expensive.
Powerful interpretability methods risk injecting new information into the target model rather than faithfully interpreting it.
Innovation

Methods, ideas, or system contributions that make the work stand out.

HyperDAS automates the location of concept features in neural networks.
A transformer-based hypernetwork selects token positions and constructs features in the residual stream.
Achieves state-of-the-art performance on the RAVEL benchmark with Llama3-8B.