AI Summary
Standard fine-tuning often degrades the in-context learning (ICL) capabilities of large language models and struggles to dynamically balance between ICL and in-weight learning (IWL) based on contextual relevance. This work proposes a contrastive context sampling strategy that mixes both similar and random examples during fine-tuning, incorporating multi-level context similarity to jointly train ICL and IWL. It reveals, for the first time, the critical role of structural similarity between context and target input in achieving an effective ICL-IWL trade-off. By leveraging a contrastive mechanism, the approach prevents the model from collapsing into pure copying, pure ICL, or pure IWL modes. Experiments across four large language models and diverse tasks demonstrate that the method consistently preserves stable hybrid reasoning capabilities and avoids mode collapse.
Abstract
We investigate training strategies that co-develop in-context learning (ICL) and in-weights learning (IWL), along with the ability to switch between them based on context relevance. Although current LLMs exhibit both modes, standard task-specific fine-tuning often erodes ICL, motivating IC-Train, i.e., fine-tuning with in-context examples. Prior work has shown that the emergence of ICL after IC-Train depends on factors such as task diversity and training duration.
In this paper we show that the similarity structure between target inputs and context examples also plays an important role. Random contexts lead to loss of ICL and dominance of IWL, while contexts containing only similar examples cause ICL to degenerate into copying labels without regard to relevance. To address this, we propose a simple Contrastive-Context strategy that enforces two types of contrast: (1) a mix of similar and random examples within each context, to evolve a correct form of ICL, and (2) varying grades of similarity across contexts, to evolve ICL-IWL mixtures. We present insights on the importance of such contrast through theoretical analysis of a minimal model, and we validate the approach with extensive empirical evaluation on four LLMs and several tasks. Diagnostic probes confirm that contrasted contexts yield stable ICL-IWL mixtures, avoiding collapse into pure ICL, IWL, or copying.
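The two types of contrast described above can be sketched as a simple context sampler. This is a hedged illustration only, assuming a generic `similarity` function over examples; the function names (`build_contrastive_context`, `sample_training_context`), the `similar_frac` parameter, and the rank-then-sample scheme are our own illustration, not the paper's exact procedure.

```python
import random

def build_contrastive_context(target, pool, similarity, k=4, similar_frac=0.5):
    """Contrast (1): mix similar and random examples within one context."""
    # Rank the candidate pool by similarity to the target input.
    ranked = sorted(pool, key=lambda ex: similarity(target, ex), reverse=True)
    n_similar = int(round(k * similar_frac))
    similar = ranked[:n_similar]
    # Fill the rest of the context with random examples from the remainder.
    remainder = [ex for ex in pool if ex not in similar]
    context = similar + random.sample(remainder, k - n_similar)
    random.shuffle(context)  # avoid positional shortcuts in the context
    return context

def sample_training_context(target, pool, similarity, k=4):
    """Contrast (2): vary the grade of similarity across contexts."""
    # Each training context draws a different mixing fraction, so the model
    # sees everything from fully random to fully similar contexts.
    similar_frac = random.choice([0.0, 0.25, 0.5, 0.75, 1.0])
    return build_contrastive_context(target, pool, similarity, k, similar_frac)
```

During IC-Train, each target input would be prepended with examples drawn via `sample_training_context`; intuitively, the all-random end of the schedule discourages pure copying while the all-similar end keeps ICL useful, which is the trade-off the abstract describes.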