On Information Self-Locking in Reinforcement Learning for Active Reasoning of LLM Agents

📅 2026-03-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical limitation in large language model (LLM) agents engaged in active reasoning: during reinforcement learning training, they often fall into "information self-locking," wherein they cease to ask informative questions and fail to integrate already-acquired knowledge, degrading exploration and creating a detrimental feedback loop. We formally characterize this phenomenon for the first time and attribute it to the joint failure of two core capabilities: action selection and belief tracking. To mitigate it, we propose injecting readily obtainable directional critique signals that decouple and rebalance the learning signals for these two components, enabling their synergistic optimization. Experiments across seven datasets demonstrate that our approach substantially alleviates information self-locking, yielding performance improvements of up to 60%.

📝 Abstract
Reinforcement learning (RL) with outcome-based rewards has achieved significant success in training large language model (LLM) agents for complex reasoning tasks. However, in active reasoning, where agents must strategically ask questions to acquire task-relevant information, we find that LLM agents trained with RL often suffer from information self-locking: the agent ceases to ask informative questions and struggles to internalize already-obtained information. To understand the phenomenon, we decompose active reasoning into two core capabilities: Action Selection (AS), which determines the observation stream through queries, and Belief Tracking (BT), which updates the agent's belief based on collected evidence. We show that deficient AS and BT capabilities limit information exploration during RL training. Insufficient exploration in turn hinders the improvement of AS and BT, creating a feedback loop that locks the agent in a low-information regime. To resolve the issue, we propose a simple yet effective approach that reallocates the learning signal by injecting easy-to-obtain directional critiques to help the agent escape self-locking. Extensive experiments on 7 datasets show that our approach significantly mitigates information self-locking, yielding improvements of up to 60%.
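The abstract's core idea — reallocating the learning signal with cheap directional critiques for the two decomposed capabilities — can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual implementation: the function name, critique scales, and weights `w_as`/`w_bt` are assumptions for illustration only.

```python
def shaped_reward(outcome, as_critique, bt_critique, w_as=0.5, w_bt=0.5):
    """Hypothetical sketch: combine the sparse outcome reward with
    directional critique signals for the two decomposed capabilities.

    outcome      -- task-level reward (e.g. 1.0 if the final answer is correct)
    as_critique  -- +1 / 0 / -1: was the agent's last query informative?
                    (Action Selection signal)
    bt_critique  -- +1 / 0 / -1: did the agent integrate the evidence it
                    already obtained? (Belief Tracking signal)
    """
    # Decoupled weighting lets AS and BT receive learning signal even when
    # the outcome reward is zero, breaking the low-information feedback loop.
    return outcome + w_as * as_critique + w_bt * bt_critique
```

Under this sketch, an episode with an uninformative question and ignored evidence is penalized (`shaped_reward(0.0, -1, -1)` → `-1.0`) rather than receiving a flat zero, so the policy gradient still points away from the self-locked regime.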
Problem

Research questions and friction points this paper is trying to address.

information self-locking
reinforcement learning
active reasoning
large language model agents
belief tracking
Innovation

Methods, ideas, or system contributions that make the work stand out.

information self-locking
active reasoning
reinforcement learning
large language models
belief tracking