🤖 AI Summary
Medium-scale LLMs (e.g., Mistral 7B, Gemma 7B, Llama-3 8B) can acquire multi-task in-context learning (ICL) ability via few-shot in-context fine-tuning on all tasks at once, yet this approach still lags significantly behind task-specific fine-tuning and suffers from catastrophic forgetting.
Method: We propose Many-Shot In-Context Fine-tuning (ManyICL), which treats every answer within the multi-example context as an autoregressive supervision target rather than predicting only the final answer. This shifts the role of many-shot examples from prompts to training targets, making long many-shot sequences efficient to train on and enabling single-pass fine-tuning for multi-task adaptation.
Contribution/Results: Evaluated across five diverse task families (classification, summarization, question answering, natural language inference, and mathematical reasoning), ManyICL substantially outperforms zero/few-shot fine-tuning, approaches the performance of task-specific fine-tuning, and significantly mitigates catastrophic forgetting.
📝 Abstract
Large language models (LLMs) possess a remarkable ability to perform in-context learning (ICL), which enables them to handle multiple downstream tasks simultaneously without requiring task-specific fine-tuning. Recent studies have shown that even moderately sized LLMs, such as Mistral 7B, Gemma 7B and Llama-3 8B, can achieve ICL through few-shot in-context fine-tuning of all tasks at once. However, this approach still lags behind dedicated fine-tuning, where a separate model is trained for each individual task. In this paper, we propose a novel approach, Many-Shot In-Context Fine-tuning (ManyICL), which significantly narrows this performance gap by extending the principles of ICL to a many-shot setting. To unlock the full potential of ManyICL and address the inherent inefficiency of processing long sequences with numerous in-context examples, we propose a novel training objective. Instead of solely predicting the final answer, our approach treats every answer within the context as a supervised training target. This effectively shifts the role of many-shot examples from prompts to targets for autoregressive learning. Through extensive experiments on diverse downstream tasks, including classification, summarization, question answering, natural language inference, and math, we demonstrate that ManyICL substantially outperforms zero/few-shot fine-tuning and approaches the performance of dedicated fine-tuning. Furthermore, ManyICL significantly mitigates catastrophic forgetting issues observed in zero/few-shot fine-tuning. The code will be made publicly available upon publication.
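The training objective described above, supervising every answer in the many-shot context rather than only the final one, can be sketched as a masked next-token loss. The function name, tensor shapes, and the explicit `answer_mask` below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

def manyicl_loss(logits, input_ids, answer_mask):
    """Hypothetical sketch: average next-token cross-entropy over every
    answer token in a concatenated many-shot sequence
    [example_1, answer_1, ..., example_n, answer_n].

    logits:      (batch, seq_len, vocab) model outputs
    input_ids:   (batch, seq_len)        token ids of the full context
    answer_mask: (batch, seq_len)        1 where a token belongs to an answer span
    """
    # Standard autoregressive shift: position t predicts token t+1.
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:]
    shift_mask = answer_mask[:, 1:].astype(bool)

    # Numerically stable log-softmax over the vocabulary.
    z = shift_logits - shift_logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

    # Gather the log-probability of each gold next token.
    b, t = shift_labels.shape
    nll = -log_probs[np.arange(b)[:, None], np.arange(t)[None, :], shift_labels]

    # Average only over answer positions; prompt tokens contribute no loss,
    # so every in-context answer acts as a supervision target.
    return nll[shift_mask].mean()
```

Setting `answer_mask` to cover only the final answer recovers the conventional few-shot fine-tuning objective, which makes the difference between the two training signals explicit.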