Learning to Learn-at-Test-Time: Language Agents with Learnable Adaptation Policies

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of existing test-time learning (TTL) methods, which rely on handcrafted, fixed adaptation strategies that are difficult to optimize for downstream tasks. To overcome this, the authors propose Meta-TTL, a novel framework that formalizes adaptation strategy learning as an optimizable objective through a bi-level optimization formulation: an inner loop performs standard test-time learning, while an outer loop employs evolutionary search to automatically optimize the adaptation strategy across a diverse set of training tasks. By integrating language agents with evolutionary mechanisms, Meta-TTL enables autonomous acquisition of task-specific adaptation strategies directly from environmental interaction, substantially enhancing generalization. Empirical results on the Jericho and WebArena-Lite benchmarks demonstrate consistent and significant improvements over manually designed baselines, both in-distribution and out-of-distribution.
📝 Abstract
Test-Time Learning (TTL) enables language agents to iteratively refine their performance through repeated interactions with the environment at inference time. At the core of TTL is an adaptation policy that updates the actor policy based on experience from previous episodes, thereby improving future behavior. Existing methods rely on fixed, hand-crafted adaptation policies rather than optimizing them for downstream improvement. We argue that optimal adaptation policies should be learned from task environments, not hand-engineered based on human intuition. To achieve this, we introduce Meta-TTL, a framework that formulates the discovery of effective adaptation policies as a bi-level optimization problem. Within this framework, the inner loop executes the standard TTL process, measuring how effectively a candidate adaptation policy helps an agent correct errors across sequential episodes. Guided by the agent's performance, the outer loop employs evolutionary search over a diverse distribution of training tasks to iteratively refine the adaptation policy. We evaluate Meta-TTL on Jericho and WebArena-Lite across both in-distribution (ID) and out-of-distribution (OOD) settings, using multiple meta-agent backbones. Results on both benchmarks show that Meta-TTL consistently outperforms hand-crafted baselines, suggesting that the optimized adaptation policy encodes transferable strategies that generalize beyond the training task distribution.
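The bi-level structure described in the abstract can be sketched as a toy simulation. This is a minimal illustration, not the paper's implementation: the environment, the scalar `strength` parameterization of the adaptation policy, and all function names here are invented for demonstration. The real system evolves adaptation policies for language agents; here the inner loop is a stand-in for sequential TTL episodes, and the outer loop is a simple elitist evolutionary search.

```python
import random


def run_ttl_episode(score, strength, rng):
    # Toy stand-in for one TTL episode: the adaptation policy's
    # "strength" controls how much the actor improves per episode.
    return min(1.0, score + strength * rng.random())


def inner_loop_ttl(policy, num_episodes=5, rng=None):
    """Inner loop: run standard TTL under a candidate adaptation policy
    and return final performance (how well errors were corrected
    across sequential episodes)."""
    rng = rng or random.Random(0)
    score = 0.1  # initial actor performance
    for _ in range(num_episodes):
        score = run_ttl_episode(score, policy["strength"], rng)
    return score


def mutate(policy, rng):
    # Evolutionary variation over the adaptation-policy parameters.
    child = dict(policy)
    child["strength"] = max(0.0, child["strength"] + rng.gauss(0, 0.05))
    return child


def meta_ttl(generations=10, population_size=6, seed=0):
    """Outer loop: evolutionary search refines the adaptation policy,
    guided by inner-loop TTL performance (evaluated with a fixed seed
    so candidates are compared on the same episode sequence)."""
    rng = random.Random(seed)
    population = [{"strength": rng.random() * 0.2} for _ in range(population_size)]
    for _ in range(generations):
        scored = sorted(
            population,
            key=lambda p: inner_loop_ttl(p, rng=random.Random(seed)),
            reverse=True,
        )
        elites = scored[: population_size // 2]
        population = elites + [
            mutate(rng.choice(elites), rng)
            for _ in range(population_size - len(elites))
        ]
    return max(population, key=lambda p: inner_loop_ttl(p, rng=random.Random(seed)))
```

In the paper the candidate "genome" is a full adaptation policy executed by a language agent, and fitness is measured over a diverse distribution of training tasks rather than a single toy environment; the loop structure above only mirrors the inner/outer decomposition.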
Problem


Test-Time Learning
Adaptation Policy
Language Agents
Meta-Learning
Bi-level Optimization
Innovation


Test-Time Learning
Adaptation Policy
Bi-level Optimization
Evolutionary Search
Language Agents