Towards Monotonic Improvement in In-Context Reinforcement Learning

📅 2025-09-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address test-time performance degradation in In-Context Reinforcement Learning (ICRL)—specifically, the "contextual ambiguity" problem wherein stochastic policy outputs generate suboptimal interaction histories, trapping the agent in poor policy cycles—this paper proposes a **monotonic improvement framework based on Context Value**. Key contributions are: (i) a formal definition of Context Value, with a theoretical proof that it tightens the lower bound on the gap to an ideal, monotonically improving policy; and (ii) a non-decreasing Context Value mechanism that jointly leverages retrospective evaluation and forward prediction to explicitly mitigate contextual ambiguity. The framework is compatible with both sequence-based models and online ICRL training paradigms. Empirical evaluation on the Dark Room and MiniGrid multi-task benchmarks demonstrates significant suppression of performance decay, improved cross-task rapid adaptation, and enhanced overall stability of ICRL agents.

📝 Abstract
In-Context Reinforcement Learning (ICRL) has emerged as a promising paradigm for developing agents that can rapidly adapt to new tasks by leveraging past experiences as context, without updating their parameters. Recent approaches train large sequence models on monotonic policy improvement data from online RL, aiming for continued performance improvement at test time. However, our experimental analysis reveals a critical flaw: at test time, these models fail to exhibit the continued improvement seen in the training data. Theoretically, we identify this phenomenon as Contextual Ambiguity, where the model's own stochastic actions can generate an interaction history that misleadingly resembles that of a sub-optimal policy from the training data, initiating a vicious cycle of poor action selection. To resolve Contextual Ambiguity, we introduce Context Value into the training phase and propose Context Value Informed ICRL (CV-ICRL). CV-ICRL uses Context Value as an explicit signal representing the ideal performance theoretically achievable by a policy given the current context. As the context expands, it incorporates more task-relevant information, so the ideal performance, and hence the Context Value, should be non-decreasing. We prove that the Context Value tightens the lower bound on the performance gap relative to an ideal, monotonically improving policy. We further propose two methods for estimating Context Value at both training and testing time. Experiments conducted on the Dark Room and Minigrid testbeds demonstrate that CV-ICRL effectively mitigates performance degradation and improves overall ICRL abilities across various tasks and environments. The source code and data of this paper are available at https://github.com/Bluixe/towards_monotonic_improvement .
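The abstract's core property, that the Context Value signal should never decrease as the context grows, can be sketched as a running maximum over per-step value estimates. This is an illustrative toy, not the paper's actual estimator: the function name and the raw estimates are assumptions for demonstration only.

```python
def nondecreasing_context_values(raw_estimates):
    """Illustrative sketch: enforce a monotone non-decreasing signal
    by taking the running maximum of raw (possibly noisy) value
    estimates as the interaction context expands."""
    values = []
    best = float("-inf")
    for v in raw_estimates:
        best = max(best, v)  # more context can only raise the ideal performance
        values.append(best)
    return values

# Noisy per-episode value estimates from a growing context.
raw = [0.2, 0.5, 0.4, 0.7, 0.6, 0.9]
print(nondecreasing_context_values(raw))  # [0.2, 0.5, 0.5, 0.7, 0.7, 0.9]
```

The running maximum captures only the monotonicity requirement; the paper itself estimates Context Value with retrospective evaluation and forward prediction.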
Problem

Research questions and friction points this paper is trying to address.

Addresses performance degradation in In-Context Reinforcement Learning at test time
Identifies Contextual Ambiguity causing misleading interaction histories in ICRL
Proposes Context Value method to ensure monotonic policy improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Context Value to resolve contextual ambiguity
Proposes Context Value Informed ICRL for training
Estimates Context Value during both training and testing