You Only Fine-tune Once: Many-Shot In-Context Fine-Tuning for Large Language Model

📅 2025-06-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Medium-scale LLMs (e.g., Mistral-7B, Gemma-7B, Llama-3-8B) support multi-task in-context learning (ICL), yet few-shot in-context fine-tuning still lags significantly behind task-specific fine-tuning and suffers from catastrophic forgetting. Method: the authors propose ManyICL, which models the entire multi-example context as an autoregressive supervision signal, treating each example’s answer as a training target and thereby enabling single-pass fine-tuning for multi-task adaptation. ManyICL integrates efficient long-context modeling with a unified multi-task in-context fine-tuning framework. Contribution/Results: Evaluated across five diverse task families—classification, summarization, question answering, natural language inference, and mathematical reasoning—ManyICL substantially outperforms zero- and few-shot fine-tuning baselines, approaches the performance of task-specific fine-tuning, exhibits strong generalization stability, and significantly reduces forgetting.

📝 Abstract
Large language models (LLMs) possess a remarkable ability to perform in-context learning (ICL), which enables them to handle multiple downstream tasks simultaneously without requiring task-specific fine-tuning. Recent studies have shown that even moderately sized LLMs, such as Mistral 7B, Gemma 7B and Llama-3 8B, can achieve ICL through few-shot in-context fine-tuning of all tasks at once. However, this approach still lags behind dedicated fine-tuning, where a separate model is trained for each individual task. In this paper, we propose a novel approach, Many-Shot In-Context Fine-tuning (ManyICL), which significantly narrows this performance gap by extending the principles of ICL to a many-shot setting. To unlock the full potential of ManyICL and address the inherent inefficiency of processing long sequences with numerous in-context examples, we propose a novel training objective. Instead of solely predicting the final answer, our approach treats every answer within the context as a supervised training target. This effectively shifts the role of many-shot examples from prompts to targets for autoregressive learning. Through extensive experiments on diverse downstream tasks, including classification, summarization, question answering, natural language inference, and math, we demonstrate that ManyICL substantially outperforms zero/few-shot fine-tuning and approaches the performance of dedicated fine-tuning. Furthermore, ManyICL significantly mitigates catastrophic forgetting issues observed in zero/few-shot fine-tuning. The code will be made publicly available upon publication.
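The abstract's key idea is a change of training objective: instead of computing the loss only on the final answer, every answer inside the many-shot context becomes a supervised target. A minimal sketch of that masking logic (not the authors' code; span lengths and function names here are illustrative) contrasts the ManyICL-style mask with the standard objective that supervises only the last example's answer:

```python
# Hedged sketch: build a 0/1 label mask over a concatenated many-shot
# sequence. Each in-context example contributes (prompt_len, answer_len)
# token spans. ManyICL supervises every answer span; the standard
# few-shot objective supervises only the final one.

def build_label_mask(example_spans, supervise_all=True):
    """Return a per-token 0/1 mask for the concatenated sequence.

    example_spans: list of (prompt_len, answer_len) pairs, in order.
    supervise_all: True  -> ManyICL-style, every answer is a target;
                   False -> only the last example's answer is a target.
    """
    mask = []
    last = len(example_spans) - 1
    for i, (prompt_len, answer_len) in enumerate(example_spans):
        mask.extend([0] * prompt_len)  # prompt tokens are never loss targets
        target = supervise_all or i == last
        mask.extend([1 if target else 0] * answer_len)
    return mask

# Three in-context examples with answer lengths 2, 1, 3:
# ManyICL supervises all 6 answer tokens; few-shot supervises only the last 3.
spans = [(4, 2), (5, 1), (3, 3)]
manyicl_mask = build_label_mask(spans, supervise_all=True)
fewshot_mask = build_label_mask(spans, supervise_all=False)
```

In a real training loop this mask would typically be applied by setting non-target positions to an ignored label (e.g., -100 in common autoregressive LM implementations), so the cross-entropy loss flows only through the supervised answer tokens.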
Problem

Research questions and friction points this paper is trying to address.

Narrowing performance gap between in-context learning and dedicated fine-tuning
Improving efficiency of processing long sequences with many examples
Mitigating catastrophic forgetting in zero/few-shot fine-tuning scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Many-Shot In-Context Fine-tuning (ManyICL)
Treats every in-context answer as supervised target
Reduces performance gap with dedicated fine-tuning
Wenchong He
University of Florida
Liqian Peng
Google
LLM · Model Reduction · Physics-Informed ML
Zhe Jiang
University of Florida
Alex Go
Google