🤖 AI Summary
Large language models (LLMs) engaged in active reasoning often suffer from belief deviation: they lose track of problem states and fall into uninformative or repetitive actions, so errors compound and reinforcement learning (RL) fails to properly credit crucial exploratory steps. To address this, the paper proposes $\mathbf{T^3}$, a simple yet effective method that tracks belief deviation, detects when it becomes excessive, and truncates trajectories during training to remove uninformative tails while preserving credit for informative exploratory prefixes, thereby improving policy optimization. Evaluated on 5 challenging tasks, $\mathbf{T^3}$ consistently improves training stability and token efficiency, achieving up to 30% performance gains while cutting rollout tokens by roughly 25%. These results point to belief control as a key principle for building robust, generalizable LLM-based active reasoners.
📝 Abstract
Active reasoning requires large language models (LLMs) to interact with external sources and strategically gather information to solve problems. Central to this process is belief tracking: maintaining a coherent understanding of the problem state and the missing information toward the solution. However, due to limited reasoning capabilities, LLM-based agents often suffer from belief deviation: they struggle to correctly model beliefs, lose track of problem states, and fall into uninformative or repetitive actions. Once this happens, errors compound and reinforcement learning (RL) training fails to properly credit the crucial exploratory steps. To address this issue, we propose to track the deviation of model beliefs and develop $\mathbf{T^3}$, a simple yet effective method that detects excessive belief deviation and truncates trajectories during training to remove uninformative tails. By preserving credit for informative prefixes, $\mathbf{T^3}$ systematically improves policy optimization. Across 5 challenging tasks, $\mathbf{T^3}$ consistently enhances training stability, token efficiency, and final performance, achieving up to 30% gains while cutting rollout tokens by roughly 25%. These results highlight belief control as a key principle for developing robust and generalizable LLM-based active reasoners.
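The truncation idea in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each rollout step already carries a scalar belief-deviation score (the function name `truncate_trajectory` and the threshold value are hypothetical), and it cuts the trajectory at the first step where deviation becomes excessive, so only the informative prefix receives RL credit.

```python
from typing import List, Tuple

def truncate_trajectory(steps: List[str],
                        deviation_scores: List[float],
                        threshold: float = 0.8) -> Tuple[List[str], int]:
    """Cut a rollout at the first step whose belief-deviation score
    exceeds the threshold, keeping only the informative prefix."""
    for t, score in enumerate(deviation_scores):
        if score > threshold:
            # Drop the uninformative tail; credit assignment sees steps[:t].
            return steps[:t], t
    # Deviation never became excessive: keep the whole rollout.
    return steps, len(steps)

# Toy example: deviation spikes at step 3, so the repetitive tail is removed.
steps = ["ask", "observe", "ask", "repeat", "repeat"]
scores = [0.1, 0.2, 0.3, 0.95, 0.97]
prefix, cut_index = truncate_trajectory(steps, scores)
# prefix == ["ask", "observe", "ask"], cut_index == 3
```

Besides recalibrating credit, discarding the tail is also where the token savings come from: truncated rollouts simply contain fewer steps.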