🤖 AI Summary
This study systematically investigates the effectiveness and limitations of test-time adaptation methods for open-source large language models that do not require updating model parameters. Focusing on many-shot in-context learning (ICL), it incorporates Dynamic and Reinforced ICL prompting strategies to evaluate how the number, ordering, and selection of examples influence performance across diverse tasks and model architectures. The findings reveal that many-shot prompting substantially improves performance on structured tasks where demonstrations provide high information gain, but is highly sensitive to example selection, and its benefits are limited in open-ended generation tasks. This work delineates the applicability boundaries and potential risks of prompt-based test-time adaptation, offering both theoretical grounding and practical guidance for real-world deployment.
📝 Abstract
Test-time adaptation enables large language models (LLMs) to modify their behavior at inference without updating model parameters. A common approach is many-shot prompting, where large numbers of in-context learning (ICL) examples are injected as an input-space test-time update. Although performance can improve as more demonstrations are added, the reliability and limits of this update mechanism remain poorly understood, particularly for open-source models. We present an empirical study of many-shot prompting across tasks and model backbones, analyzing how performance varies with update magnitude, example ordering, and selection policy. We further study Dynamic and Reinforced ICL as alternative test-time update strategies that control which information is injected and how it constrains model behavior. We find that many-shot prompting is effective for structured tasks where demonstrations provide high information gain, but is highly sensitive to selection strategy and often shows limited benefits for open-ended generation tasks. Overall, we characterize the practical limits of prompt-based test-time adaptation and outline when input-space updates are beneficial versus harmful.
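As a concrete illustration of the mechanism the abstract describes, many-shot prompting amounts to selecting demonstrations from a pool and concatenating them ahead of the query as an input-space update. The sketch below is a minimal, hypothetical example: the helper names (`overlap_score`, `select_demos`, `build_prompt`) and the lexical-overlap selection policy are illustrative assumptions, not the study's actual selection strategies.

```python
# Minimal sketch of many-shot prompt construction with a selection policy.
# The lexical-overlap policy below is a stand-in for the selection
# strategies the study varies; it is not the paper's implementation.

def overlap_score(a: str, b: str) -> float:
    """Crude lexical similarity (Jaccard over whitespace tokens)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def select_demos(pool, query, k):
    """Pick the k demos whose inputs best overlap the query."""
    return sorted(pool, key=lambda d: overlap_score(d[0], query),
                  reverse=True)[:k]

def build_prompt(demos, query):
    """Concatenate demonstrations (the input-space update), then the query."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    return f"{shots}\n\nInput: {query}\nOutput:"

# Toy demonstration pool (illustrative data only).
pool = [
    ("translate cat to French", "chat"),
    ("translate dog to French", "chien"),
    ("sum 2 and 3", "5"),
]
query = "translate bird to French"
prompt = build_prompt(select_demos(pool, query, k=2), query)
```

Scaling `k` toward the full pool gives the many-shot regime; the study's sensitivity findings correspond to how much the choice of `select_demos` (and the ordering of its output) changes downstream accuracy.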