🤖 AI Summary
Large language models (LLMs) incur substantial computational overhead during inference; while dynamic pruning improves efficiency, it exacerbates alignment degradation because input-dependent circuit selection does not consistently preserve safety-critical circuits. This work proposes an alignment-aware dynamic structured pruning framework, Alignment-Aware Probe Pruning (AAPP), that integrates alignment-sensitivity probes into the pruning process to adaptively identify and preserve neural pathways critical to alignment behavior across diverse inputs. Unlike conventional methods, AAPP explicitly prioritizes circuits whose activation patterns correlate with safe, aligned outputs, thereby mitigating degradation on safety-critical tasks. Evaluated on LLaMA, Qwen, and Gemma models, the method improves refusal of harmful requests by 50% at equivalent computational cost, demonstrating a principled trade-off between inference acceleration and alignment preservation.
📝 Abstract
Large Language Models require substantial computational resources for inference, posing deployment challenges. While dynamic pruning offers superior efficiency over static methods through adaptive circuit selection, it exacerbates alignment degradation because input-dependent circuit selection does not consistently preserve safety-critical circuits across diverse inputs. As a result, addressing these heightened alignment vulnerabilities remains critical. We introduce Alignment-Aware Probe Pruning (AAPP), a dynamic structured pruning method that builds upon Probe Pruning and adaptively preserves alignment-relevant circuits during inference. Experiments on LLaMA 2-7B, Qwen2.5-14B-Instruct, and Gemma-3-12B-IT show AAPP improves refusal rates by 50% at matched compute, enabling efficient yet safety-preserving LLM deployment.
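The core idea, scoring neurons by a blend of input-dependent activation importance and alignment sensitivity from a probe, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's actual algorithm: the linear probe weights, the `alpha` blend parameter, and the scoring rule are all hypothetical choices made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one MLP layer with d hidden neurons and a
# pretrained linear alignment probe over its activations (assumed given).
d = 16
hidden = rng.normal(size=(4, d))   # activations for a batch of 4 tokens
probe_w = rng.normal(size=d)       # alignment-sensitivity probe weights

def prune_mask(hidden, probe_w, keep_ratio=0.5, alpha=0.5):
    """Keep the top-k neurons by a blend of mean activation magnitude
    (input-dependent importance) and probe-weighted alignment importance."""
    magnitude = np.abs(hidden).mean(axis=0)      # per-neuron activation strength
    alignment = np.abs(probe_w) * magnitude      # probe-weighted importance
    score = (1 - alpha) * magnitude + alpha * alignment
    k = max(1, int(keep_ratio * hidden.shape[1]))
    keep = np.argsort(score)[-k:]                # indices of retained neurons
    mask = np.zeros(hidden.shape[1], dtype=bool)
    mask[keep] = True
    return mask

mask = prune_mask(hidden, probe_w)
pruned = hidden * mask   # masked forward pass: pruned neurons output zero
```

Setting `alpha=0` recovers a purely activation-based dynamic pruning criterion; increasing `alpha` shifts the budget toward neurons the probe flags as alignment-relevant, which is the trade-off the abstract describes.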