Safe In-Context Reinforcement Learning

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses safety assurance in In-Context Reinforcement Learning (ICRL) for zero-shot task adaptation: how to enable agents to dynamically balance reward maximization against safety cost minimization—without parameter updates—under a given safety budget. We propose a safety-aware ICRL framework grounded in Constrained Markov Decision Processes (CMDPs), which achieves gradient-free adaptation by extending the policy network's context input space, and incorporates a cost-aware policy optimization mechanism that lets the agent switch autonomously between aggressive and conservative behaviors. Crucially, we formulate safety as an online behavioral regulation problem subject to an adjustable cost-threshold constraint—a novel conceptualization in ICRL. Experiments demonstrate that our method maintains competitive task performance while significantly reducing safety violations, and that it responds sensitively and robustly to varying cost budgets, achieving an effective trade-off between safety and task performance.

📝 Abstract
In-context reinforcement learning (ICRL) is an emerging RL paradigm where the agent, after some pretraining procedure, is able to adapt to out-of-distribution test tasks without any parameter updates. The agent achieves this by continually expanding the input (i.e., the context) to its policy neural network. For example, the input could be all of the experience the agent has accumulated up to the current time step. The agent's performance improves as the input grows, without any parameter updates. In this work, we propose the first method that promotes the safety of ICRL's adaptation process in the framework of constrained Markov Decision Processes. In other words, during the parameter-update-free adaptation process, the agent not only maximizes the reward but also minimizes an additional cost function. We also demonstrate that our agent actively reacts to the threshold (i.e., budget) of the cost tolerance. With a higher cost budget, the agent behaves more aggressively, and with a lower cost budget, the agent behaves more conservatively.
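The adaptation mechanism described in the abstract—a frozen policy whose only changing input is the growing context of past experience—can be sketched in a toy bandit setting. Everything here (`frozen_policy`, `icrl_episode`, the optimistic score for untried actions, the toy environment) is a hypothetical illustration, not the paper's actual architecture:

```python
def frozen_policy(context):
    """Stand-in for a pretrained policy network: it conditions only on
    the growing context; its weights never change during adaptation."""
    scores = []
    for a in (0, 1):
        rewards = [r for (act, r) in context if act == a]
        # Optimistic value for untried actions induces in-context exploration.
        scores.append(sum(rewards) / len(rewards) if rewards else 1.0)
    return scores.index(max(scores))

def icrl_episode(env_step, horizon=10):
    """Parameter-update-free adaptation: the context (action-reward history)
    expands at every step; it is the only state that changes."""
    context, total_reward = [], 0.0
    for _ in range(horizon):
        action = frozen_policy(context)
        reward = env_step(action)
        context.append((action, reward))  # expand the policy's input
        total_reward += reward
    return total_reward

# Toy two-armed bandit: action 1 pays 1, action 0 pays 0.
def toy_env_step(action):
    return float(action)

print(icrl_episode(toy_env_step))  # → 9.0 (locks onto action 1 after one trial of each arm)
```

The point of the sketch is that performance improves purely through the accumulated context—`frozen_policy` is never retrained.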
Problem

Research questions and friction points this paper is trying to address.

Ensuring safety during in-context reinforcement learning adaptation without parameter updates
Minimizing cost function while maximizing rewards in constrained Markov Decision Processes
Enabling agents to dynamically adjust behavior based on cost tolerance thresholds
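The last friction point—behavior that adjusts to the cost-tolerance threshold—can be illustrated with a minimal sketch. Feeding the remaining budget to the policy as an input is the general idea; the concrete functions, reward/cost values, and switching rule below are assumptions for illustration only:

```python
def budget_conditioned_action(remaining_budget, risky_cost=1.0):
    """Hypothetical budget-conditioned policy: the remaining safety budget
    is part of the input, so behavior shifts with no parameter update.
    While the budget covers the risky action's cost, act aggressively
    (higher reward, incurs cost); otherwise act conservatively (zero cost)."""
    return "aggressive" if remaining_budget >= risky_cost else "conservative"

def run_with_budget(budget, steps=10, risky_cost=1.0):
    """Roll out one episode and track cumulative reward and cost."""
    reward, cost = 0.0, 0.0
    for _ in range(steps):
        action = budget_conditioned_action(budget - cost, risky_cost)
        if action == "aggressive":
            reward += 2.0
            cost += risky_cost
        else:
            reward += 1.0
    return reward, cost

print(run_with_budget(3.0))   # → (13.0, 3.0): aggressive until the budget is spent
print(run_with_budget(10.0))  # → (20.0, 10.0): a larger budget permits more aggression
```

This mirrors the qualitative behavior the paper reports: a higher cost budget yields more aggressive (higher-reward, higher-cost) trajectories, a lower one yields conservative trajectories, and the realized cost never exceeds the budget.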
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts in-context to new tasks without any parameter updates
Frames safe adaptation as a constrained Markov Decision Process
Conditions behavior on an adjustable cost-budget threshold
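These contributions rest on the standard constrained-MDP objective (stated here in its textbook form, not copied from the paper): maximize expected discounted reward subject to the expected discounted cost staying within the budget $d$:

```latex
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)\right]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} c(s_t, a_t)\right] \le d
```

Here $r$ is the reward, $c$ the additional cost function, and $d$ the adjustable cost budget the agent reacts to during parameter-update-free adaptation.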