On the generalization of language models from in-context learning and finetuning: a controlled study

📅 2025-05-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Fine-tuned large language models (LLMs) exhibit weak generalization, particularly on tasks requiring relational inversion and logical reasoning. To systematically investigate this limitation, we construct a knowledge-isolated, controllable dataset to benchmark and contrast the generalization capabilities of in-context learning (ICL) versus fine-tuning. Our analysis reveals that ICL demonstrates superior inductive flexibility—attributable to its reliance on dynamic, example-driven reasoning rather than static parameter updates. Motivated by this insight, we propose Contextual Reasoning Injection (CRI): a method that explicitly incorporates ICL-derived reasoning chains into fine-tuning examples, thereby enabling explicit knowledge transfer of reasoning capabilities into parametric models. Evaluated across multi-dimensional generalization benchmarks—including diverse data splits and reasoning types—CRI consistently enhances fine-tuned model performance. Crucially, our work uncovers a fundamental distinction in inductive bias: ICL leverages adaptive, context-sensitive inference, whereas standard fine-tuning tends to overfit surface-level pattern matching.

Technology Category

Application Category

📝 Abstract
Large language models exhibit exciting capabilities, yet can show surprisingly narrow generalization from finetuning -- from failing to generalize to simple reversals of relations they are trained on, to missing logical deductions that can be made from trained information. These failures to generalize from fine-tuning can hinder practical application of these models. However, language models' in-context learning shows different inductive biases, and can generalize better in some of these cases. Here, we explore these differences in generalization between in-context- and fine-tuning-based learning. To do so, we constructed several novel datasets to evaluate and improve models' ability to generalize from finetuning data. The datasets are constructed to isolate the knowledge in the dataset from that in pretraining, to create clean tests of generalization. We expose pretrained large models to controlled subsets of the information in these datasets -- either in context, or through fine-tuning -- and evaluate their performance on test sets that require various types of generalization. We find overall that in data-matched settings, in-context learning can generalize more flexibly than fine-tuning (though we also find some qualifications of prior findings, such as cases when fine-tuning can generalize to reversals embedded in a larger structure of knowledge). We build on these findings to propose a method to enable improved generalization from fine-tuning: adding in-context inferences to finetuning data. We show that this method improves generalization across various splits of our datasets and other benchmarks. Our results have implications for understanding the inductive biases of different modes of learning in language models, and practically improving their performance.
Problem

Research questions and friction points this paper is trying to address.

Comparing generalization differences between in-context learning and fine-tuning
Evaluating models' ability to generalize from controlled datasets
Improving fine-tuning generalization via in-context inference integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comparing in-context learning and fine-tuning generalization
Creating datasets to isolate and test generalization abilities
Enhancing fine-tuning with in-context inferences for better performance
🔎 Similar Papers
No similar papers found.