🤖 AI Summary
Existing fine-tuning datasets are predominantly constructed at the sentence level, which misaligns with the token-level optimization mechanism of large language models and introduces token-level noise that degrades downstream performance. To address this issue, this work proposes XTF, a novel framework that enables interpretable token-level noise filtering for the first time. XTF decomposes each token's contribution to fine-tuning into three interpretable attributes—reasoning importance, knowledge novelty, and task relevance—assigns a noise score based on these dimensions, and suppresses the gradients of identified noisy tokens. This approach substantially enhances fine-tuning data quality, yielding consistent performance gains across seven mainstream large language models on three challenging domains (mathematics, code generation, and medical reasoning), with improvements of up to 13.7%.
📝 Abstract
Large Language Models (LLMs) have seen remarkable advancements, achieving state-of-the-art results in diverse applications. Fine-tuning, an important step for adapting LLMs to specific downstream tasks, typically involves further training on corresponding datasets. However, a fundamental discrepancy exists between current fine-tuning datasets and the token-level optimization mechanism of LLMs: most datasets are designed at the sentence level, which introduces token-level noise and degrades final performance. In this paper, we propose XTF, an explainable token-level noise filtering framework. XTF decomposes the complex and subtle contributions of token-level data to the fine-tuning process into three distinct and explicit attributes (reasoning importance, knowledge novelty, and task relevance), which can be assessed using scoring methods, and then masks the gradients of selected noisy tokens accordingly to optimize the performance of fine-tuned LLMs. We conduct extensive experiments on three representative downstream tasks (math, code, and medicine) across seven mainstream LLMs. The results demonstrate that XTF can significantly improve downstream performance by up to 13.7% compared to regular fine-tuning. Our work highlights the importance of token-level dataset optimization, and demonstrates the potential of attribute-decomposition strategies for explaining complex training mechanisms.
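The core mechanism the abstract describes (score each token on the three attributes, combine into a noise score, then mask noisy tokens' loss so they contribute no gradient) can be sketched in a few lines. The function names, the simple averaging scheme, and the threshold below are illustrative assumptions, not the paper's actual scoring or masking implementation:

```python
# Hypothetical sketch of XTF-style token-level noise filtering.
# The attribute scores, their combination, and the threshold are
# illustrative assumptions, not the paper's actual method.

def noise_score(reasoning_importance, knowledge_novelty, task_relevance):
    """Combine the three attribute scores into one noise score.
    Here we simply assume lower attribute scores mean a noisier token."""
    return 1.0 - (reasoning_importance + knowledge_novelty + task_relevance) / 3.0

def masked_token_losses(token_losses, token_attributes, threshold=0.5):
    """Zero out the per-token loss of tokens whose noise score exceeds
    the threshold, so those tokens produce no gradient during fine-tuning."""
    masked = []
    for loss, (ri, kn, tr) in zip(token_losses, token_attributes):
        if noise_score(ri, kn, tr) > threshold:
            masked.append(0.0)  # gradient suppressed for this noisy token
        else:
            masked.append(loss)  # token kept; trains normally
    return masked
```

In an actual training loop, this mask would be applied to the per-token cross-entropy losses before the backward pass, so filtering happens at the optimization level rather than by deleting text from the dataset.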