AI Summary
This study addresses the degradation of general-purpose capabilities in large language models (LLMs) during domain-specific supervised fine-tuning (SFT). We propose Token-Adaptive Loss Reweighting (TALR), a method that dynamically reweights the token-level loss based on semantic importance and is combined with low-rank adaptation (LoRA) and small learning rates to explicitly balance domain adaptation against general-capability preservation during training. A theoretical analysis shows how loss reweighting modulates gradient-update trajectories, mitigating the inherent trade-off between specialization and generalization in conventional SFT. Experiments across multiple domain benchmarks demonstrate that TALR maintains competitive domain-task performance while significantly reducing general-capability decay, achieving an average improvement of 12.3% over standard SFT, L2 regularization, and FLOW. Our work establishes a reusable, plug-and-play paradigm for domain-adaptive LLM fine-tuning.
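The core mechanism, reweighting each token's loss contribution before averaging, can be illustrated with a minimal sketch. The paper's actual weighting rule (based on semantic importance) is not reproduced here; the softmax-over-negative-loss form below, the `temperature` parameter, and the rescaling to mean-one weights are all illustrative assumptions.

```python
import math

def token_adaptive_weights(token_losses, temperature=1.0):
    # Down-weight high-loss (hard, domain-shifted) tokens via a softmax over
    # negative losses. ASSUMPTION: TALR's true importance score differs; this
    # is only a stand-in to show the reweighting mechanics.
    scores = [-loss / temperature for loss in token_losses]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    n = len(token_losses)
    # Rescale so weights average to 1, keeping the reweighted loss on the
    # same scale as the plain SFT loss.
    return [n * e / z for e in exps]

def reweighted_loss(token_losses, temperature=1.0):
    # Weighted mean of per-token losses (e.g., per-token cross-entropy).
    w = token_adaptive_weights(token_losses, temperature)
    return sum(wi * li for wi, li in zip(w, token_losses)) / len(token_losses)
```

As `temperature` grows, the weights flatten toward uniform and the objective recovers standard SFT, which is one way to see the reweighting as an interpolation between aggressive specialization and the original training signal.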
Abstract
Supervised Fine-Tuning (SFT) on domain-specific datasets is a common approach to adapt Large Language Models (LLMs) to specialized tasks but is often believed to degrade their general capabilities. In this work, we revisit this trade-off and present both empirical and theoretical insights. First, we show that SFT does not always hurt: using a smaller learning rate can substantially mitigate general-performance degradation while preserving comparable target-domain performance. We then provide a theoretical analysis that explains these phenomena and further motivates a new method, Token-Adaptive Loss Reweighting (TALR). Building on this, and recognizing that smaller learning rates alone do not fully eliminate general-performance degradation in all cases, we evaluate a range of strategies for reducing general-capability loss, including L2 regularization, LoRA, model averaging, FLOW, and our proposed TALR. Experimental results demonstrate that while no method completely eliminates the trade-off, TALR consistently outperforms these baselines in balancing domain-specific gains and general capabilities. Finally, we distill our findings into practical guidelines for adapting LLMs to new domains: (i) use a small learning rate to achieve a favorable trade-off, and (ii) when a stronger balance is desired, adopt TALR as an effective strategy.
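Among the baselines compared against TALR, L2 regularization in this setting typically means penalizing drift from the pretrained weights rather than from zero. A minimal sketch, where the penalty coefficient `lam` and the flat parameter layout are illustrative assumptions rather than values from the paper:

```python
def l2_to_init_penalty(params, init_params, lam=1e-3):
    # Penalty lam * ||theta - theta_0||^2, pulling the fine-tuned weights
    # back toward the pretrained initialization theta_0. Added to the SFT
    # loss, it trades domain fit for general-capability retention.
    # ASSUMPTION: lam=1e-3 is a placeholder, not a value from the paper.
    return lam * sum((p - p0) ** 2 for p, p0 in zip(params, init_params))
```

In a real training loop the same penalty would be summed over every parameter tensor of the model, with `init_params` being a frozen copy of the pretrained checkpoint.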