AI Summary
This work investigates the mechanism behind the strong correlation between the number of in-context examples and prediction performance in in-context learning (ICL). We propose the first Bayesian-theoretic ICL scaling law, modeling ICL as approximate Bayesian inference and explicitly characterizing the quantitative interplay among task prior, learning efficiency, and per-example information contribution. We validate the theory through extensive experiments: (i) empirical evaluation on GPT-2 variants; (ii) controlled synthetic-data studies; (iii) comparisons with supervised fine-tuning (SFT); and (iv) many-shot jailbreaking tests. Our law matches or outperforms existing scaling models in predictive accuracy; precisely forecasts the critical number of examples at which ICL recovers a capability suppressed by SFT; and reveals an inherent limitation of post-training alignment, namely its inability to fully suppress unsafe behaviors under many-shot prompting. Collectively, this work establishes a unified theoretical framework for interpreting ICL behavior, assessing safety risks, and analyzing alignment mechanisms.
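The summary names three interpretable quantities: a task prior, a learning-efficiency coefficient, and a per-example probability under each candidate task. A minimal sketch of how such a Bayesian mixture could turn those quantities into a shot-count scaling curve (the exact functional form used in the paper may differ; the priors, per-example probabilities, and efficiency `K` below are illustrative assumptions, not fitted values):

```python
def bayesian_icl_prob(n, priors, per_example_probs, K=1.0):
    """Predicted probability of the next in-context example after n shots,
    treating ICL as Bayesian inference over a finite set of candidate tasks.

    priors            -- prior weight rho_m of each task m
    per_example_probs -- expected per-example probability p_m under task m
    K                 -- ICL efficiency: how fast evidence accrues per shot
    """
    # Unnormalized posterior weight of task m after n examples: rho_m * p_m^(K*n).
    weights = [r * p ** (K * n) for r, p in zip(priors, per_example_probs)]
    # Posterior-weighted average of the per-example probabilities.
    num = sum(w * p for w, p in zip(weights, per_example_probs))
    return num / sum(weights)
```

With more shots, the posterior concentrates on the task whose per-example probability is highest, so the predicted curve rises from the prior-weighted average toward that task's probability; `K` controls how quickly.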
Abstract
In-context learning (ICL) is a powerful technique for getting language models to perform complex tasks with no training updates. Prior work has established strong correlations between the number of in-context examples provided and the accuracy of the model's predictions. In this paper, we seek to explain this correlation by showing that ICL approximates a Bayesian learner. This perspective gives rise to a family of novel Bayesian scaling laws for ICL. In experiments with GPT-2 models of different sizes, our scaling laws exceed or match existing scaling laws in accuracy while also offering interpretable terms for task priors, learning efficiency, and per-example probabilities. To illustrate the analytic power that such interpretable scaling laws provide, we report on controlled synthetic dataset experiments designed to inform real-world studies of safety alignment. In our experimental protocol, we use SFT to suppress an unwanted existing model capability and then use ICL to try to bring that capability back (many-shot jailbreaking). We then experiment on real-world instruction-tuned LLMs using capabilities benchmarks as well as a new many-shot jailbreaking dataset. In all cases, Bayesian scaling laws accurately predict the conditions under which ICL will cause the suppressed behavior to reemerge, which sheds light on the ineffectiveness of post-training at increasing LLM safety.
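A practical use the abstract describes is forecasting when many-shot prompting will resurface an SFT-suppressed behavior. A minimal sketch under a two-task version of the law, where SFT is modeled as pushing the suppressed task's prior `rho` down (all numbers here are illustrative assumptions, not fitted values from the paper):

```python
def posterior_prob(n, rho, p_task, p_other, K=1.0):
    """Predicted next-example probability after n shots, mixing a suppressed
    task (prior rho, per-example prob p_task) with the aligned alternative."""
    w_task = rho * p_task ** (K * n)            # unnormalized posterior, suppressed task
    w_other = (1.0 - rho) * p_other ** (K * n)  # unnormalized posterior, aligned task
    return (w_task * p_task + w_other * p_other) / (w_task + w_other)

def critical_shots(threshold, rho, p_task, p_other, K=1.0, n_max=10_000):
    """Smallest number of in-context examples at which the predicted
    probability of the suppressed behavior crosses `threshold`,
    or None if it never does within n_max shots."""
    for n in range(n_max + 1):
        if posterior_prob(n, rho, p_task, p_other, K) >= threshold:
            return n
    return None
```

Because each shot multiplies the suppressed task's posterior odds by `(p_task / p_other) ** K`, driving the prior `rho` lower only delays, rather than prevents, the crossover, which is the limitation of post-training alignment the abstract points to.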