TRIM: Token-wise Attention-Derived Saliency for Data-Efficient Instruction Tuning

πŸ“… 2025-10-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Instruction tuning typically relies on large-scale datasets, and existing coreset-selection methods incur prohibitive computational overhead while neglecting fine-grained semantic features. Method: This paper proposes TRIM, a framework that uses a multi-layer attention mechanism to generate token-level, interpretable "fingerprints" without backpropagation, enabling efficient sample selection. TRIM captures task-structural sensitivity through forward-pass attention analysis, token saliency evaluation, and representational pattern matching. Contributions/Results: Across multiple benchmarks, TRIM selects subsets comprising less than 5% of the original data and outperforms state-of-the-art coreset methods by up to 9%; in several settings it even surpasses full-data fine-tuning, while reducing training computational cost by an order of magnitude.

πŸ“ Abstract
Instruction tuning is essential for aligning large language models (LLMs) to downstream tasks and commonly relies on large, diverse corpora. However, small, high-quality subsets, known as coresets, can deliver comparable or superior results, though curating them remains challenging. Existing methods often rely on coarse, sample-level signals like gradients, an approach that is computationally expensive and overlooks fine-grained features. To address this, we introduce TRIM (Token Relevance via Interpretable Multi-layer Attention), a forward-only, token-centric framework. Instead of using gradients, TRIM operates by matching underlying representational patterns identified via attention-based "fingerprints" from a handful of target samples. Such an approach makes TRIM highly efficient and uniquely sensitive to the structural features that define a task. Coresets selected by our method consistently outperform state-of-the-art baselines by up to 9% on downstream tasks and even surpass the performance of full-data fine-tuning in some settings. By avoiding expensive backward passes, TRIM achieves this at a fraction of the computational cost. These findings establish TRIM as a scalable and efficient alternative for building high-quality instruction-tuning datasets.
Problem

Research questions and friction points this paper is trying to address.

Selecting small high-quality subsets for efficient instruction tuning of LLMs
Addressing computational expense of gradient-based coreset selection methods
Capturing fine-grained token-level features ignored by sample-level approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses attention-based fingerprints for token selection
Forward-only framework avoids gradient computation
Matches representational patterns from target samples
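The pipeline implied by these bullets (forward-pass attention → token saliency → fingerprint → similarity-based selection) can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact formulation: the saliency definition (mean attention received across layers, heads, and queries), the saliency-weighted fingerprint pooling, and all function names here are assumptions.

```python
import numpy as np

def token_saliency(attn):
    """attn: (layers, heads, seq, seq) attention weights from one forward pass.
    Assumed saliency of token j = attention it receives, averaged over
    layers, heads, and query positions. Returns shape (seq,)."""
    return attn.mean(axis=(0, 1, 2))

def fingerprint(hidden, saliency):
    """hidden: (seq, dim) final-layer hidden states.
    Pool token states weighted by saliency, then l2-normalize so that
    dot products below are cosine similarities."""
    v = (saliency[:, None] * hidden).sum(axis=0)
    return v / (np.linalg.norm(v) + 1e-8)

def select_coreset(cand_fps, target_fps, k):
    """Score each candidate by its best cosine similarity to any target
    fingerprint; keep the top-k candidates. Both inputs are row-normalized."""
    scores = cand_fps @ target_fps.T          # (n_cand, n_target)
    best = scores.max(axis=1)
    return np.argsort(-best)[:k]

# Synthetic demo with random "model outputs".
rng = np.random.default_rng(0)
attn = rng.random((4, 8, 10, 10))             # 4 layers, 8 heads, 10 tokens
sal = token_saliency(attn)
target_fp = fingerprint(rng.random((10, 16)), sal)
cand_fps = np.stack(
    [fingerprint(rng.random((10, 16)), sal) for _ in range(100)]
)
chosen = select_coreset(cand_fps, target_fp[None, :], k=5)
```

Note that every step is forward-only: saliency and fingerprints come from attention maps and hidden states, so no backward pass or gradient storage is needed, which is where the claimed order-of-magnitude cost reduction would come from.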
πŸ”Ž Similar Papers
No similar papers found.