AI Summary
To address the growth in memory and computational overhead caused by context accumulation in long-horizon decision-making, this paper proposes Acting through Belief Bottlenecks Expressed in Language (ABBEL). ABBEL replaces the full interaction history with a compact, natural-language belief state, updated at each step via Bayesian-style prior-to-posterior language inference for effective state compression. The authors further post-train agents with reinforcement learning, incorporating a belief-quality grade and a belief-length penalty into the objective to balance task performance against representational succinctness. Evaluated on six multi-step tasks, ABBEL achieves near-constant memory scaling and yields interpretable, verifiable belief states. After RL optimization, it surpasses full-context baselines in performance while using less memory than contemporaneous methods, combining interpretability, low computational overhead, and strong decision-making for long-horizon sequential reasoning.
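As a hedged illustration of the prior-to-posterior loop described above, the Python sketch below shows how an ABBEL-style agent might keep only a belief string in context. All names here (`update_belief`, `select_action`, `run_episode`, and the `env` interface returning `(observation, done)`) are hypothetical illustrations, not the paper's actual API.

```python
# Minimal sketch of the ABBEL loop, under assumed interfaces:
# `llm` is a callable mapping a prompt string to a completion string,
# `env.reset()` returns an observation, `env.step(a)` returns (obs, done).

def update_belief(llm, prior: str, observation: str) -> str:
    """Ask the LLM to revise the prior belief with the newest observation."""
    prompt = (
        "Prior belief about the task's unknowns:\n"
        f"{prior}\n\n"
        f"New observation:\n{observation}\n\n"
        "Rewrite the belief as a concise posterior summary."
    )
    return llm(prompt)

def select_action(llm, posterior: str) -> str:
    """Choose the next action from the posterior belief alone,
    never from the raw interaction history."""
    prompt = (
        f"Current belief:\n{posterior}\n\n"
        "Given only this belief, what is the next action?"
    )
    return llm(prompt)

def run_episode(llm, env, max_steps: int = 50) -> str:
    belief = "No information gathered yet."
    observation = env.reset()
    for _ in range(max_steps):
        belief = update_belief(llm, belief, observation)  # prior -> posterior
        action = select_action(llm, belief)               # act on posterior only
        observation, done = env.step(action)
        if done:
            break
    return belief
```

Memory use stays near-constant across steps because only the current belief and the latest observation are kept in context, never the full transcript.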
Abstract
As the length of sequential decision-making tasks increases, it becomes computationally impractical to keep full interaction histories in context. We introduce a general framework for LLM agents to maintain concise contexts through multi-step interaction: Acting through Belief Bottlenecks Expressed in Language (ABBEL), and methods to further improve ABBEL agents with RL post-training. ABBEL replaces the long multi-step interaction history with a belief state, i.e., a natural-language summary of what has been discovered about task-relevant unknowns. Under ABBEL, at each step the agent first updates a prior belief with the most recent observation from the environment to form a posterior belief, then uses only the posterior to select an action. We systematically evaluate frontier models under ABBEL across six diverse multi-step environments, finding that ABBEL supports generating interpretable beliefs while maintaining near-constant memory use over interaction steps. However, bottleneck approaches are generally prone to error propagation, which we observe to cause inferior performance relative to the full-context setting due to errors in belief updating. Therefore, we train LLMs to generate and act on beliefs within the ABBEL framework via reinforcement learning (RL). We experiment with belief grading to reward higher-quality beliefs, as well as belief-length penalties to reward more compressed beliefs. Our experiments demonstrate the ability of RL to improve ABBEL's performance beyond the full-context setting, while using less memory than contemporaneous approaches.
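The RL post-training described above shapes the reward with a belief-quality grade and a length penalty. The sketch below is a minimal, assumed formulation: the `grade_belief` scorer, the [0, 1] grade scale, and the weights are illustrative choices, not the paper's exact objective.

```python
# Hedged sketch of a shaped RL reward combining task outcome, a
# belief-quality grade, and a belief-length penalty. Weights and the
# grading interface are assumptions for illustration only.

def shaped_reward(
    task_reward: float,        # reward from the environment/task outcome
    belief_grade: float,       # quality score for the belief, assumed in [0, 1]
    belief_tokens: int,        # length of the belief state in tokens
    grade_weight: float = 0.5,
    length_weight: float = 0.01,
) -> float:
    # Reward higher-quality beliefs; penalize longer ones to encourage
    # compressed yet informative belief states.
    return task_reward + grade_weight * belief_grade - length_weight * belief_tokens
```

The length penalty trades a small amount of reward for succinctness, pushing the policy toward beliefs that compress the history without discarding task-relevant information.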