🤖 AI Summary
To address the limited generalization of large language models (LLMs) under distribution shift, this paper proposes Test-Time Learning for LLMs (TLM), a test-time adaptation paradigm that dynamically adapts the model to the target domain using only unlabeled test data. Methodologically, TLM introduces three key components: (1) it formulates input perplexity minimization as a self-supervised test-time optimization objective; (2) it designs an active selection strategy targeting high-perplexity samples to improve update efficiency; and (3) it employs LoRA-based parameter updates for lightweight, stable adaptation that resists catastrophic forgetting. Evaluated on the newly constructed AdaptEval benchmark, TLM achieves substantial improvements in domain knowledge adaptation, outperforming the original LLMs by at least 20%, while requiring no labeled data, incurring minimal computational overhead, and generalizing well across domains. This work establishes a practical pathway for efficient and robust test-time adaptation of LLMs.
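The LoRA-based update in point (3) keeps the pre-trained weight matrix frozen and trains only a low-rank residual. A minimal sketch of the resulting forward pass, using plain nested lists; the helper names and the tiny dimensions are illustrative assumptions, not the paper's implementation:

```python
def matmul(P, Q):
    # Naive matrix product of nested lists.
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def lora_forward(W, A, B, x, alpha=1.0, r=1):
    # Effective weight: W + (alpha / r) * B @ A.
    # The base weight W (d_out x d_in) stays frozen; only the low-rank
    # factors A (r x d_in) and B (d_out x r) are updated at test time.
    scale = alpha / r
    delta = matmul(B, A)
    d_out, d_in = len(W), len(W[0])
    W_eff = [[W[i][j] + scale * delta[i][j] for j in range(d_in)]
             for i in range(d_out)]
    # Apply the adapted weight to an input vector x.
    return [sum(W_eff[i][j] * x[j] for j in range(d_in)) for i in range(d_out)]
```

Because only `A` and `B` receive gradients, the number of trainable parameters scales with the rank `r` rather than with the full weight matrix, which is what keeps the test-time updates lightweight and the original knowledge largely intact.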
📝 Abstract
While Large Language Models (LLMs) have exhibited remarkable emergent capabilities through extensive pre-training, they still face critical limitations in generalizing to specialized domains and handling diverse linguistic variations, i.e., distribution shifts. In this paper, we propose a Test-Time Learning (TTL) paradigm for LLMs, namely TLM, which dynamically adapts LLMs to target domains using only unlabeled test data during testing. Specifically, we first provide empirical evidence and theoretical insights showing that more accurate predictions can be achieved by minimizing the input perplexity of the unlabeled test data. Based on this insight, we formulate the test-time learning process of LLMs as input perplexity minimization, enabling self-supervised enhancement of LLM performance. Furthermore, we observe that high-perplexity samples tend to be more informative for model optimization, and accordingly introduce a Sample Efficient Learning Strategy that actively selects and emphasizes these samples for test-time updates. Lastly, to mitigate catastrophic forgetting and ensure adaptation stability, we adopt Low-Rank Adaptation (LoRA) instead of full-parameter optimization, which allows lightweight model updates while preserving more of the model's original knowledge. We introduce the AdaptEval benchmark for TTL and demonstrate through experiments that TLM improves performance by at least 20% over the original LLMs on domain knowledge adaptation.
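The perplexity-based scoring and selection described above can be sketched as follows; the function names and the top-k selection rule are illustrative assumptions, not the paper's exact procedure:

```python
import math

def input_perplexity(token_logprobs):
    # PPL(x) = exp(-(1/T) * sum_t log p(x_t | x_<t)); higher perplexity
    # means the sample lies further from the model's current distribution.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def select_high_perplexity(batch, k):
    # Rank unlabeled test samples by input perplexity and keep the top-k
    # most surprising ones for the test-time update step.
    # batch: list of (sample_id, per-token log-probs under the model).
    ranked = sorted(batch, key=lambda s: input_perplexity(s[1]), reverse=True)
    return [sample_id for sample_id, _ in ranked[:k]]
```

At test time, the selected samples' input perplexity would then be minimized by gradient steps on the LoRA parameters only, making the objective fully self-supervised.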