Attribution-guided Pruning for Compression, Circuit Discovery, and Targeted Correction in LLMs

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the dual challenges of deploying large language models (LLMs)—namely, high memory and computational overhead, and safety risks such as toxic outputs. To this end, we propose an attribution-guided unstructured pruning framework grounded in Layer-wise Relevance Propagation (LRP). This work is the first to extend LRP to fine-grained attribution and pruning in LLMs, unifying three objectives: (1) efficient model compression—achieving >50% parameter pruning on Llama and OPT with negligible performance degradation; (2) automatic identification of task-critical subcircuits (e.g., indirect object recognition pathways); and (3) targeted mitigation of harmful behaviors. Unlike conventional parameter-level pruning, our approach enables functional-level intervention, enhancing both interpretability and safety. By bridging explainability and structured model editing, it establishes a novel paradigm for lightweight, trustworthy LLM deployment.

📝 Abstract
Large Language Models (LLMs) are central to many contemporary AI applications, yet their extensive parameter counts pose significant challenges for deployment in memory- and compute-constrained environments. Recent works in eXplainable AI (XAI), particularly on attribution methods, suggest that interpretability can also enable model compression by identifying and removing components irrelevant to inference. In this paper, we leverage Layer-wise Relevance Propagation (LRP) to perform attribution-guided pruning of LLMs. While LRP has shown promise in structured pruning for vision models, we extend it to unstructured pruning in LLMs and demonstrate that it can substantially reduce model size with minimal performance loss. Our method is especially effective in extracting task-relevant subgraphs -- so-called "circuits" -- which can represent core functions (e.g., indirect object identification). Building on this, we introduce a technique for model correction, by selectively removing circuits responsible for spurious behaviors (e.g., toxic outputs). All in all, we gather these techniques into a unified, holistic framework and showcase its effectiveness and limitations through extensive experiments for compression, circuit discovery and model correction on Llama and OPT models, highlighting its potential for improving both model efficiency and safety. Our code is publicly available at https://github.com/erfanhatefi/SparC3.
Problem

Research questions and friction points this paper is trying to address.

Efficiently compress LLMs for memory-constrained environments
Discover task-relevant subgraphs (circuits) in LLMs
Correct spurious behaviors by removing toxic output circuits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attribution-guided pruning using Layer-wise Relevance Propagation
Extracting task-relevant subgraphs for core functions
Selectively removing circuits for model correction
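The core idea behind the first innovation, attribution-guided unstructured pruning, can be sketched in a few lines. This is a minimal illustration using the epsilon-rule of LRP on a single linear layer, not the authors' released implementation (see their SparC3 repository); the function names and the global relevance-thresholding scheme here are assumptions for the sake of example.

```python
import numpy as np

def lrp_weight_relevance(a, W, R_out, eps=1e-6):
    """Epsilon-rule LRP for one linear layer with pre-activations z = a @ W.

    Distributes the relevance R_out of each output unit back onto the
    individual weights, proportionally to each weight's contribution
    a_i * W_ij. Returns a per-weight relevance map with the same shape as W.
    """
    z = a @ W                                       # pre-activations, shape (out,)
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilized denominator
    contrib = a[:, None] * W                        # contribution of each weight
    return contrib * (R_out / denom)[None, :]

def prune_by_relevance(W, relevance, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with the lowest |relevance|."""
    scores = np.abs(relevance).ravel()
    k = int(sparsity * scores.size)
    threshold = np.partition(scores, k)[k]          # k-th smallest score
    return W * (np.abs(relevance) >= threshold)
```

Because epsilon-LRP is approximately conservative, the per-weight relevances sum back to the layer's output relevance, which is what makes such scores comparable across the network when a global pruning threshold is chosen.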
Sayed Mohammad Vakilzadeh Hatefi
Department of Artificial Intelligence, Fraunhofer Heinrich-Hertz-Institute
Maximilian Dreyer
Explainable AI Group, Fraunhofer Heinrich Hertz Institute
Explainable AI (XAI) · Interpretability · Artificial Intelligence · Computer Vision
Reduan Achtibat
Department of Artificial Intelligence, Fraunhofer Heinrich-Hertz-Institute
Patrick Kahardipraja
Department of Artificial Intelligence, Fraunhofer Heinrich-Hertz-Institute
Thomas Wiegand
Department of Electrical Engineering and Computer Science, Technische Universität Berlin; BIFOLD - Berlin Institute for the Foundations of Learning and Data
Wojciech Samek
Professor at TU Berlin, Head of AI Department at Fraunhofer HHI, BIFOLD Fellow
Deep Learning · Interpretability · Explainable AI · Trustworthy AI · Federated Learning
S. Lapuschkin
Centre of eXplainable Artificial Intelligence, Technological University Dublin