🤖 AI Summary
Instruction tuning often causes pretrained language models to forget foundational knowledge or to become overly conversational and verbose, which can degrade in-context learning (ICL) performance. This work studies the trade-off between instruction-following capability and ICL ability by tracing the performance trajectory between base and instruct models: using the *partial adaptation* method, the authors scale down the strength of instruction tuning without any additional training. Across several model families and model sizes, reducing instruction-tuning strength yields material gains on a few-shot ICL benchmark covering classic NLP tasks, at the cost of some instruction-following ability as measured by AlpacaEval. The results highlight a practical trade-off between in-context learning and instruction following that is worth weighing when deploying tuned models.
📝 Abstract
Instruct models, obtained from various instruction tuning or post-training steps, are commonly deemed superior and more usable than their base counterparts. While the model gains instruction-following ability, instruction tuning may lead to forgetting of knowledge from pre-training, or it may encourage the model to be overly conversational or verbose. This, in turn, can degrade in-context few-shot learning performance. In this work, we study the performance trajectory between base and instruct models by scaling down the strength of instruction tuning via the partial adaptation method. We show that, across several model families and model sizes, reducing the strength of instruction tuning yields material improvement on a few-shot in-context learning benchmark covering a variety of classic natural language tasks. This comes at the cost of losing some degree of instruction-following ability as measured by AlpacaEval. Our study sheds light on a potential trade-off between in-context learning and instruction-following abilities that is worth considering in practice.
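One natural reading of "scaling down the strength of instruction tuning" is linear interpolation between base and instruct model weights. The sketch below illustrates that idea on toy scalar parameters; it is an assumption-laden illustration, not the paper's exact formulation, and the function name `partial_adapt` and the coefficient `alpha` are hypothetical:

```python
# Hypothetical sketch: partial adaptation read as linear interpolation
# between base and instruct weights. alpha scales the "strength" of
# instruction tuning: 1.0 recovers the instruct model, 0.0 the base model.
# No additional training is needed; we only re-mix existing parameters.

def partial_adapt(base_weights, instruct_weights, alpha):
    """Interpolate each parameter: base + alpha * (instruct - base)."""
    assert 0.0 <= alpha <= 1.0, "alpha must lie in [0, 1]"
    return {
        name: base + alpha * (instruct_weights[name] - base)
        for name, base in base_weights.items()
    }

# Toy example with scalar stand-ins for model tensors.
base = {"w": 1.0, "b": 0.0}
instruct = {"w": 3.0, "b": 2.0}

half = partial_adapt(base, instruct, 0.5)
print(half)  # {'w': 2.0, 'b': 1.0}
```

In practice one would apply the same per-parameter mix to full model state dicts and sweep `alpha` to trace the trajectory between the two endpoints.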