🤖 AI Summary
To address the high cost of aligning large language models (LLMs) to downstream tasks, driven by heavy reliance on labeled data and full-parameter fine-tuning, this paper proposes a data-free attention-head localization and pruning method. The approach identifies task-critical attention heads via gradient sensitivity analysis, then applies structured sparse fine-tuning with local parameter freezing and dynamic update mechanisms, requiring fine-tuning of only ~10% of attention parameters. To our knowledge, this is the first method to identify sensitive heads without any task-specific data, enabling head identification to be reused across datasets and significantly mitigating catastrophic forgetting. Evaluated on three diverse downstream task categories, the method achieves an average 2% performance gain over strong baselines while preserving robust generalization and training stability.
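The summary does not spell out how gradient sensitivity is turned into per-head scores. As a rough illustration only, one common way to score attention heads is to backpropagate a proxy objective and aggregate gradient magnitudes over each head's slice of an attention projection. The sketch below assumes a LLaMA-style Hugging Face model layout (`model.model.layers[i].self_attn.o_proj`) and a generic `proxy_loss` standing in for the paper's unspecified data-free criterion; none of these names come from the paper itself:

```python
import torch

def score_heads(model, proxy_loss, num_layers, num_heads, head_dim):
    """Return a (num_layers, num_heads) tensor of gradient-sensitivity scores.

    Assumptions (not from the paper): proxy_loss is a scalar tensor built
    without task-specific data, and each layer exposes a LLaMA-style
    self_attn.o_proj whose input columns are grouped per head.
    """
    model.zero_grad()
    proxy_loss.backward()
    scores = torch.zeros(num_layers, num_heads)
    for layer_idx, layer in enumerate(model.model.layers):
        # o_proj.weight has shape (hidden, num_heads * head_dim); slice the
        # gradient per head and take its total magnitude as the sensitivity.
        grad = layer.self_attn.o_proj.weight.grad
        per_head = grad.view(grad.size(0), num_heads, head_dim)
        scores[layer_idx] = per_head.abs().sum(dim=(0, 2))
    return scores
```

Selecting the top-scoring ~10% of heads from `scores` (e.g., via `torch.topk`) would then determine which attention parameters stay trainable.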
📝 Abstract
Aligning general-purpose large language models (LLMs) to downstream tasks often incurs significant costs, including constructing task-specific instruction pairs and extensive training adjustments. Prior research has explored various avenues to enhance alignment efficiency, primarily through minimal-data training or data-driven activations that identify key attention heads. However, these approaches inherently introduce data dependency, which hinders generalization and reusability. To address this issue and enhance model alignment efficiency, we propose the **A**ttention **L**ocalization and **P**runing **S**trategy (**ALPS**), an efficient algorithm that localizes the most task-sensitive attention heads and prunes training by restricting attention updates to these heads, thereby reducing alignment costs. Experimental results demonstrate that our method activates only **10%** of attention parameters during fine-tuning while achieving a **2%** performance improvement over baselines on three tasks. Moreover, the identified task-specific heads are transferable across datasets and mitigate knowledge forgetting. Our work and findings provide a novel perspective on efficient LLM alignment.
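The abstract likewise leaves the update-restriction mechanism unspecified. A minimal sketch of one plausible realization, freezing all parameters and masking gradients so optimizer steps touch only the selected heads' output-projection columns, is shown below; the `top_heads` mapping, the o_proj layout, and the gradient hook are all our assumptions, not details from the paper:

```python
import torch

def freeze_except_heads(model, top_heads, num_heads, head_dim):
    """Restrict training updates to the selected attention heads.

    top_heads (hypothetical): dict mapping layer index -> set of head
    indices chosen as task-sensitive, e.g. from score_heads above.
    """
    for p in model.parameters():
        p.requires_grad = False  # freeze everything by default
    for layer_idx, layer in enumerate(model.model.layers):
        keep = top_heads.get(layer_idx, set())
        if not keep:
            continue
        w = layer.self_attn.o_proj.weight
        w.requires_grad = True
        # Build a per-column mask selecting only the kept heads, then zero
        # the gradients of all other columns so they never receive updates.
        mask = torch.zeros(num_heads, dtype=torch.bool)
        mask[list(keep)] = True
        col_mask = mask.repeat_interleave(head_dim)  # length num_heads*head_dim
        w.register_hook(lambda g, m=col_mask: g * m.to(g.device))
    return model
```

Under this scheme the optimizer can be constructed over `filter(lambda p: p.requires_grad, model.parameters())`, so the remaining ~90% of attention parameters are never stepped.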