AIR: Post-training Data Selection for Reasoning via Attention Head Influence

📅 2025-12-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing post-training distillation methods struggle to identify the causal importance of individual reasoning steps in multi-step reasoning tasks, resulting in inefficient data selection. Method: This paper proposes AIR, a training-free, unsupervised, mechanism-driven data selection framework that quantifies a sample's value by the causal influence of attention heads on each reasoning step. AIR evaluates data at both the reasoning-step and sample levels through three components: attention-head mechanism analysis, construction of an ablated reference model, and measurement of the resulting loss difference. Contribution/Results: By avoiding the biases inherent in conventional heuristics (e.g., sequence length, entropy, or loss magnitude), AIR substantially improves accuracy across multiple multi-step reasoning benchmarks. It precisely identifies high-value samples and critical reasoning steps, establishing a novel paradigm for efficiently distilling the reasoning capabilities of large language models.
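The "reference model ablation construction" step can be sketched concretely: disabling an attention head's influence amounts to zeroing that head's output before the heads are concatenated and projected. The array shapes and function name below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ablate_heads(per_head_outputs, critical_heads):
    """Zero the outputs of the given reasoning-critical heads.

    per_head_outputs: array of shape (num_heads, seq_len, head_dim),
    one slice per attention head, taken before the heads are
    concatenated and passed through the output projection.
    Returns a copy in which the listed heads contribute nothing.
    """
    ablated = per_head_outputs.copy()
    ablated[list(critical_heads)] = 0.0
    return ablated

# Toy example: 4 heads, 3 tokens, head_dim 2; disable heads 1 and 3.
full = np.ones((4, 3, 2))
weakened = ablate_heads(full, [1, 3])
```

Running the weakened model alongside the original then yields the two per-step losses whose difference the framework measures.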

πŸ“ Abstract
LLMs achieve remarkable multi-step reasoning capabilities, yet effectively transferring these skills via post-training distillation remains challenging. Existing data selection methods, ranging from manual curation to heuristics based on length, entropy, or overall loss, fail to capture the causal importance of individual reasoning steps, limiting distillation efficiency. To address this, we propose Attention Influence for Reasoning (AIR), a principled, unsupervised and training-free framework that leverages mechanistic insights of the retrieval head to select high-value post-training data. AIR first identifies reasoning-critical attention heads of an off-the-shelf model, then constructs a weakened reference model with disabled head influence, and finally quantifies the resulting loss divergence as the Attention Influence Score. This score enables fine-grained assessment at both the step and sample levels, supporting step-level weighted fine-tuning and global sample selection. Experiments across multiple reasoning benchmarks show that AIR consistently improves reasoning accuracy, surpassing heuristic baselines and effectively isolating the most critical steps and samples. Our work establishes a mechanism-driven, data-efficient approach for reasoning distillation in LLMs.
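The scoring described in the abstract can be illustrated with a minimal sketch: the Attention Influence Score for a step is the loss divergence between the weakened reference model and the original model. The function names and the mean aggregation are illustrative assumptions, not the paper's exact formulation.

```python
def attention_influence_scores(loss_weakened, loss_original):
    """Per-step Attention Influence Score: how far the weakened
    reference model's loss rises above the original model's loss on
    each reasoning step. Larger means the step relies more heavily
    on the disabled reasoning-critical heads."""
    assert len(loss_weakened) == len(loss_original)
    return [w - o for w, o in zip(loss_weakened, loss_original)]

def sample_score(step_scores):
    """One plausible sample-level aggregation (the mean); the
    paper's exact aggregation may differ."""
    return sum(step_scores) / len(step_scores)

# Toy example: three reasoning steps in one sample.
loss_original = [0.9, 1.1, 0.8]  # per-step loss, off-the-shelf model
loss_weakened = [1.4, 1.2, 2.0]  # per-step loss, critical heads disabled
scores = attention_influence_scores(loss_weakened, loss_original)
# The third step shows the largest divergence, so it depends most
# on the ablated heads.
```

These per-step scores feed both uses named in the abstract: weighting steps during fine-tuning and ranking whole samples for selection.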
Problem

Research questions and friction points this paper is trying to address.

Selecting high-value data for post-training reasoning distillation in LLMs
Capturing causal importance of individual reasoning steps during data selection
Improving reasoning accuracy through mechanism-driven, unsupervised data selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages reasoning-critical attention heads for data selection
Uses weakened reference model to quantify loss divergence
Enables fine-grained step and sample level assessment
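The two downstream uses listed above, step-level weighted fine-tuning and global sample selection, can be sketched as follows. The softmax temperature and keep ratio are assumed knobs for illustration, not values from the paper.

```python
import math

def step_weights(step_scores, temperature=1.0):
    """Turn per-step Attention Influence Scores into per-step loss
    weights for fine-tuning via a temperature-scaled softmax."""
    exps = [math.exp(s / temperature) for s in step_scores]
    total = sum(exps)
    return [e / total for e in exps]

def select_top_samples(sample_scores, keep_ratio=0.5):
    """Global sample selection: return the indices of the
    highest-scoring fraction of the candidate pool."""
    k = max(1, int(len(sample_scores) * keep_ratio))
    ranked = sorted(range(len(sample_scores)),
                    key=lambda i: sample_scores[i], reverse=True)
    return sorted(ranked[:k])

weights = step_weights([0.5, 0.1, 1.2])          # step 3 gets most weight
kept = select_top_samples([0.2, 0.9, 0.4, 0.7])  # keep the top half
```

The softmax keeps every step in play while emphasizing high-influence ones; hard top-k selection at the sample level discards low-value samples outright.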
Jinrui Liu, Beihang University
Jeff Wu, Independent Researcher
Xuanguang Pan, Beihang University
Gavin Cheung, Independent Researcher
Shuai Ma, Beihang University
Chongyang Tao, Associate Professor of Computer Science, Beihang University
Research interests: Natural Language Processing, Dialogue Systems, Information Retrieval, Data Intelligence