On the Effect of Instruction Tuning Loss on Generalization

📅 2025-07-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a fundamental design flaw in the autoregressive loss function used in instruction tuning: the conventional practice of computing loss solely over response tokens is suboptimal. To address this, we propose Weighted Instruction Tuning (WIT), which assigns token-level weights, either learnable or heuristically determined, to differentially scale the loss contributions from prompt and response segments. Extensive experiments across multiple large language models (LLaMA, Qwen), instruction datasets (Alpaca, Self-Instruct), and evaluation benchmarks (MT-Bench, AlpacaEval, out-of-distribution robustness tests) demonstrate that assigning low-to-moderate prompt weights (0.1–0.5) and moderate-to-high response weights (0.5–0.9) consistently improves generalization, task-specific performance, and compatibility with subsequent alignment methods (e.g., DPO, RLHF). These results underscore the critical role of fine-grained, weighted loss modeling in enhancing instruction tuning efficacy.

📝 Abstract
Instruction Tuning has emerged as a pivotal post-training paradigm that enables pre-trained language models to better follow user instructions. Despite its significance, little attention has been given to optimizing the loss function used. A fundamental, yet often overlooked, question is whether the conventional auto-regressive objective, where loss is computed only on response tokens and prompt tokens are excluded, is truly optimal for instruction tuning. In this work, we systematically investigate the impact of differentially weighting prompt and response tokens in the instruction tuning loss, and propose Weighted Instruction Tuning (WIT) as a better alternative to conventional instruction tuning. Through extensive experiments on five language models of different families and scales, three finetuning datasets of different sizes, and five diverse evaluation benchmarks, we show that the standard instruction tuning loss often yields suboptimal performance and limited robustness to input prompt variations. We find that a low-to-moderate weight for prompt tokens coupled with a moderate-to-high weight for response tokens yields the best-performing models across settings; these models also serve as better starting points for subsequent preference alignment training. These findings highlight the need to reconsider the instruction tuning loss and offer actionable insights for developing more robust and generalizable models. Our code is open-sourced at https://github.com/kowndinya-renduchintala/WIT.
Problem

Research questions and friction points this paper is trying to address.

Optimizing loss function in instruction tuning
Impact of weighting prompt and response tokens
Improving model robustness and generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weighted Instruction Tuning (WIT) reweights the instruction tuning loss
Differentially weights prompt and response tokens at the token level
Improves robustness and generalization across models and benchmarks
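The core idea can be sketched in a few lines: instead of masking prompt tokens out of the loss entirely, WIT averages per-token losses with separate weights for prompt and response segments. The snippet below is a minimal illustration, not the authors' implementation; the function name `wit_loss` and the specific weight values are assumptions (the paper reports the best results for prompt weights around 0.1–0.5 and response weights around 0.5–0.9).

```python
# Illustrative sketch of a WIT-style loss: a weighted average of per-token
# negative log-likelihoods, with separate weights for prompt and response
# tokens. Weight values here are hypothetical examples within the ranges
# the paper identifies as best-performing.

def wit_loss(token_nlls, is_prompt, w_prompt=0.3, w_response=0.7):
    """Weighted average of per-token NLLs.

    token_nlls : per-token negative log-likelihoods from the model
    is_prompt  : True for prompt tokens, False for response tokens
    """
    weights = [w_prompt if p else w_response for p in is_prompt]
    total = sum(weights)
    return sum(w * nll for w, nll in zip(weights, token_nlls)) / total

# Per-token NLLs for a toy 4-token sequence: 2 prompt + 2 response tokens.
nlls = [2.0, 1.0, 0.5, 0.25]
mask = [True, True, False, False]
print(wit_loss(nlls, mask))

# Conventional instruction tuning is the special case w_prompt=0, w_response=1:
print(wit_loss(nlls, mask, w_prompt=0.0, w_response=1.0))
```

Note that setting `w_prompt=0` and `w_response=1` recovers the standard response-only objective, which makes the conventional loss one point in the weight space that WIT searches over.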