APP: Accelerated Path Patching with Task-Specific Pruning

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing circuit discovery methods, such as Path Patching, carry high computational overhead that limits in-depth circuit analysis even in small-scale models. To address this, the paper proposes Accelerated Path Patching (APP), a hybrid approach built on Contrastive-FLAP, a novel task-oriented attention head pruning algorithm that brings causal mediation analysis into structured pruning. Contrastive-FLAP first shrinks the circuit-discovery search space by 56% on average; Path Patching is then applied only to the surviving attention heads. Experiments demonstrate that, while the discovered circuits perform nearly identically to established baselines (Pearson correlation >0.98), APP accelerates computation by 59.63%–93.27%. This substantial speed-up improves the efficiency of circuit identification in mechanistic interpretability research without compromising fidelity or explanatory power.

📝 Abstract
Circuit discovery is a key step in many mechanistic interpretability pipelines. Current methods, such as Path Patching, are computationally expensive and offer limited in-depth circuit analysis for smaller models. In this study, we propose Accelerated Path Patching (APP), a hybrid approach leveraging our novel contrastive attention head pruning method to drastically reduce the search space of circuit discovery methods. Our Contrastive-FLAP pruning algorithm uses techniques from causal mediation analysis to assign higher pruning scores to task-specific attention heads, leading to higher-performing sparse models compared to traditional pruning techniques. Although Contrastive-FLAP is successful at preserving task-specific heads that existing pruning algorithms remove at low sparsity ratios, the circuits found by Contrastive-FLAP alone are too large to satisfy the minimality constraint required in circuit analysis. APP first applies Contrastive-FLAP to reduce the search space required for circuit discovery algorithms by, on average, 56%. Next, APP applies traditional Path Patching on the remaining attention heads, leading to a speed-up of 59.63%–93.27% compared to Path Patching applied to the dense model. Despite the substantial computational savings that APP provides, circuits obtained from APP exhibit substantial overlap and similar performance to previously established Path Patching circuits.
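The two-stage pipeline the abstract describes can be sketched in plain Python. Everything below is illustrative: the head scores, effect values, `keep_ratio`, and thresholds are made-up stand-ins, not the paper's actual implementation, which operates on transformer attention heads via activation patching.

```python
# Toy sketch of the APP two-stage pipeline (illustrative assumptions only).

def contrastive_prune(heads, score_fn, keep_ratio=0.44):
    """Stage 1: rank heads by a contrastive (task vs. corrupted) score and
    keep only the top fraction, shrinking the search space (~56% pruned on
    average, per the abstract)."""
    ranked = sorted(heads, key=score_fn, reverse=True)
    n_keep = max(1, round(len(ranked) * keep_ratio))
    return ranked[:n_keep]

def path_patch(heads, effect_fn, min_effect=0.1):
    """Stage 2: run (toy) path patching only on the surviving heads,
    keeping those whose causal effect on the task metric is large."""
    return [h for h in heads if effect_fn(h) >= min_effect]

# Hypothetical example: 12 heads with made-up scores and patching effects.
scores  = dict(enumerate([0.9, 0.1, 0.8, 0.05, 0.7, 0.2,
                          0.6, 0.15, 0.5, 0.02, 0.4, 0.3]))
effects = dict(enumerate([0.5, 0.0, 0.4, 0.0, 0.05, 0.0,
                          0.3, 0.0, 0.2, 0.0, 0.01, 0.02]))

survivors = contrastive_prune(list(scores), scores.get, keep_ratio=0.44)
circuit = path_patch(survivors, effects.get, min_effect=0.1)
print(sorted(circuit))  # → [0, 2, 6, 8]
```

The point of the hybrid is visible even in this toy: Path Patching's expensive per-head evaluation (`effect_fn`) runs on 5 heads instead of 12, which is where the reported 59.63%–93.27% speed-up comes from.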
Problem

Research questions and friction points this paper is trying to address.

Accelerating circuit discovery by reducing computational costs of Path Patching
Improving task-specific attention head pruning via contrastive causal mediation analysis
Achieving minimal, interpretable circuits while preserving performance and overlap with established Path Patching circuits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid approach with contrastive attention head pruning
Contrastive-FLAP algorithm assigns task-specific pruning scores
Reduces search space and accelerates path patching
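The contrastive scoring idea in the bullets above can be sketched as follows. This is a loose sketch under stated assumptions: the paper's actual FLAP statistic is not reproduced here, and `fluctuation`, the activation values, and the clean/corrupted pairing are all hypothetical illustrations of scoring a head by how differently it behaves on task versus contrastive inputs.

```python
# Hypothetical sketch of a contrastive pruning score in the spirit of
# Contrastive-FLAP: a head's importance is the gap between its activation
# statistics on clean task inputs and on corrupted (contrastive) inputs,
# so task-specific heads are protected from pruning. Not the paper's formula.
from statistics import pvariance

def fluctuation(acts):
    """FLAP-style statistic (assumed): variance of a head's activations."""
    return pvariance(acts)

def contrastive_score(clean_acts, corrupt_acts):
    """Task-specificity: fluctuation on clean inputs minus fluctuation on
    corrupted inputs. Large positive values mark task-specific heads."""
    return fluctuation(clean_acts) - fluctuation(corrupt_acts)

# Made-up activations: head A reacts to the task, head B is input-agnostic.
head_a = contrastive_score([0.1, 0.9, 0.2, 0.8], [0.5, 0.5, 0.5, 0.5])
head_b = contrastive_score([0.4, 0.6, 0.5, 0.5], [0.4, 0.6, 0.5, 0.5])
print(head_a > head_b)  # → True
```

A plain magnitude-based score would rate both heads similarly; the contrastive difference is what separates task-specific heads (head A) from generic ones (head B), which is the failure mode of traditional pruning that the Innovation bullets highlight.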