AI Summary
Existing post-training distillation methods struggle to identify the causal importance of individual reasoning steps in multi-step reasoning tasks, resulting in inefficient data selection. Method: This paper proposes AIR, a training-free, unsupervised, mechanism-driven data-selection framework that quantifies sample value from the causal influence of attention heads on each reasoning step. AIR performs fine-grained evaluation at both the reasoning-step and sample levels through attention-head mechanism analysis, construction of an ablated reference model, and loss-difference measurement. Contribution/Results: By avoiding the biases inherent in common heuristics (e.g., sequence length, entropy, or loss magnitude), AIR significantly improves accuracy across multiple multi-step reasoning benchmarks. It precisely identifies high-value samples and critical reasoning steps, establishing a new paradigm for efficient distillation of large language models' reasoning capabilities.
Abstract
LLMs achieve remarkable multi-step reasoning capabilities, yet effectively transferring these skills via post-training distillation remains challenging. Existing data selection methods, ranging from manual curation to heuristics based on length, entropy, or overall loss, fail to capture the causal importance of individual reasoning steps, limiting distillation efficiency. To address this, we propose Attention Influence for Reasoning (AIR), a principled, unsupervised, and training-free framework that leverages mechanistic insights into retrieval heads to select high-value post-training data. AIR first identifies reasoning-critical attention heads of an off-the-shelf model, then constructs a weakened reference model with the influence of those heads disabled, and finally quantifies the resulting loss divergence as the Attention Influence Score. This score enables fine-grained assessment at both the step and sample levels, supporting step-level weighted fine-tuning and global sample selection. Experiments across multiple reasoning benchmarks show that AIR consistently improves reasoning accuracy, surpassing heuristic baselines and effectively isolating the most critical steps and samples. Our work establishes a mechanism-driven, data-efficient approach for reasoning distillation in LLMs.
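The scoring pipeline described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: it assumes per-step losses have already been computed once with the original model and once with the head-ablated reference model, and that step scores are averaged into a sample score (the actual aggregation rule is not specified here). All function and variable names are illustrative.

```python
# Hedged sketch of the Attention Influence Score: the score of a reasoning
# step is the loss divergence caused by disabling reasoning-critical heads.

def attention_influence_scores(loss_full, loss_ablated):
    """Per-step score: ablated-model loss minus original-model loss.

    A large positive value means the disabled heads mattered for that step.
    """
    return [la - lf for lf, la in zip(loss_full, loss_ablated)]

def sample_score(step_scores):
    """Aggregate step scores into one sample-level value (mean; an assumption)."""
    return sum(step_scores) / len(step_scores)

# Toy example: one sample with three reasoning steps.
full = [1.2, 0.9, 1.5]     # per-step loss of the off-the-shelf model
ablated = [1.3, 2.1, 1.6]  # per-step loss with critical heads disabled

steps = attention_influence_scores(full, ablated)
print(steps)                # step 2 dominates: the ablation hurt it most
print(sample_score(steps))  # sample-level value for global selection
```

In this toy run, the second step receives a much larger score than the others, so step-level weighting would emphasize it during fine-tuning, and samples whose aggregate score is high would be preferred in global selection.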