Alignment-Constrained Dynamic Pruning for LLMs: Identifying and Preserving Alignment-Critical Circuits

📅 2025-11-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) incur substantial computational overhead during inference. Dynamic pruning improves efficiency, but because the circuits it retains are input-dependent, it can discard safety-critical pathways and thereby exacerbate alignment degradation. This work proposes an alignment-aware dynamic structured pruning framework, built on Probe Pruning, that integrates alignment-sensitivity probes into the pruning process to adaptively identify and preserve neural pathways critical to aligned behavior across diverse inputs. Unlike conventional methods, it explicitly prioritizes circuits whose activation patterns correlate with safe, aligned outputs, thereby mitigating performance decay on safety-critical tasks. Evaluated on LLaMA, Qwen, and Gemma models, the method achieves a 50% improvement in refusal capability against harmful requests at equivalent computational cost, demonstrating a principled trade-off between inference acceleration and alignment preservation.

📝 Abstract
Large Language Models require substantial computational resources for inference, posing deployment challenges. While dynamic pruning offers superior efficiency over static methods through adaptive circuit selection, it exacerbates alignment degradation because its input-dependent circuit selection does not reliably preserve safety-critical circuits across diverse inputs. Addressing these heightened alignment vulnerabilities is therefore critical. We introduce Alignment-Aware Probe Pruning (AAPP), a dynamic structured pruning method that builds on Probe Pruning and adaptively preserves alignment-relevant circuits during inference. Experiments on LLaMA 2-7B, Qwen2.5-14B-Instruct, and Gemma-3-12B-IT show AAPP improves refusal rates by 50% at matched compute, enabling efficient yet safety-preserving LLM deployment.
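The core idea of blending a task-importance score with an alignment-sensitivity probe score when choosing which channels to keep can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function name, the linear score-blending rule, and the `align_weight` parameter are all assumptions; the paper does not specify its exact selection criterion here.

```python
import numpy as np

def alignment_aware_prune_mask(task_scores, align_scores, keep_ratio, align_weight=0.5):
    """Return a boolean keep-mask over channels.

    Blends per-channel task importance with alignment-sensitivity probe
    scores (hypothetical linear rule), then keeps the top keep_ratio
    fraction, so alignment-critical channels survive pruning even when
    their task importance alone is modest.
    """
    combined = (1.0 - align_weight) * task_scores + align_weight * align_scores
    k = max(1, int(round(keep_ratio * len(combined))))
    # Indices of the k highest combined scores.
    keep_idx = np.argsort(combined)[-k:]
    mask = np.zeros(len(combined), dtype=bool)
    mask[keep_idx] = True
    return mask

# Channel 1 has low task importance but a high alignment score,
# so it is retained at 50% sparsity under this blended criterion.
task = np.array([0.9, 0.1, 0.5, 0.2])
align = np.array([0.0, 0.9, 0.1, 0.8])
mask = alignment_aware_prune_mask(task, align, keep_ratio=0.5)
```

In a dynamic setting, `align_scores` would be recomputed per input from a lightweight probe pass, so the retained circuit changes with the prompt.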
Problem

Research questions and friction points this paper is trying to address.

Dynamic pruning worsens alignment degradation in large language models
Preserving safety-critical circuits across diverse inputs remains challenging
Current methods fail to maintain alignment while improving efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic pruning preserves alignment-critical circuits adaptively
Method improves refusal rates by 50% at matched compute
Enables efficient yet safety-preserving LLM deployment
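The refusal-rate metric behind the reported 50% improvement can be approximated with a simple keyword heuristic. This is a sketch only: the marker strings are illustrative, and the paper's actual refusal judgment (which may use a classifier or human evaluation) is not specified here.

```python
def refusal_rate(responses, refusal_markers=("I can't", "I cannot", "I'm sorry")):
    """Fraction of responses that open with a refusal marker.

    Simplified string-prefix heuristic; the marker list is an
    illustrative assumption, not the paper's evaluation protocol.
    """
    refused = sum(
        any(r.strip().startswith(m) for m in refusal_markers)
        for r in responses
    )
    return refused / len(responses)

# One refusal out of two responses to harmful prompts.
rate = refusal_rate(["I cannot help with that.", "Sure, here is how..."])
```

A 50% improvement here is relative: e.g., a baseline pruned model refusing 40% of harmful requests rising to 60% under AAPP at the same compute budget.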