🤖 AI Summary
In delayed reinforcement learning, perceptual latency induces state estimation errors that accumulate recursively over time. To address this, we propose a direct belief prediction paradigm that bypasses conventional recursive state inference and instead performs end-to-end estimation of the environment's belief state from historical observations, thereby eliminating error propagation at its source. We theoretically establish that this paradigm provides stronger performance guarantees and enables multi-step bootstrapping to accelerate policy learning. To operationalize it, we introduce the Directly Forecasting Belief Transformer (DFBT), which integrates Transformer-based sequence modeling with probabilistic belief estimation, tailored for offline RL. Empirical evaluation shows that DFBT significantly reduces belief prediction error on D4RL benchmarks and substantially outperforms state-of-the-art methods on MuJoCo tasks, demonstrating both accurate belief modeling and efficient policy learning.
📝 Abstract
Reinforcement learning (RL) with delays is challenging because sensory perceptions lag behind the actual events: the RL agent needs to estimate the real state of its environment from past observations. State-of-the-art (SOTA) methods typically employ recursive, step-by-step forecasting of states, which can accumulate compounding errors. To tackle this problem, our novel belief estimation method, named Directly Forecasting Belief Transformer (DFBT), directly forecasts states from observations without incrementally estimating intermediate states step by step. We theoretically demonstrate that DFBT greatly reduces the compounding errors of existing recursive forecasting methods, yielding stronger performance guarantees. In experiments on D4RL offline datasets, DFBT reduces compounding errors with remarkable prediction accuracy. DFBT's capability to forecast state sequences also facilitates multi-step bootstrapping, greatly improving learning efficiency. On the MuJoCo benchmark, our DFBT-based method substantially outperforms SOTA baselines.
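The core intuition behind the recursive-vs-direct distinction can be illustrated with a toy numerical sketch (not the paper's actual model): a learned one-step predictor with per-step noise, rolled out recursively over a delay window, accumulates error at every step, whereas a single direct prediction of the delayed state pays the noise cost only once. The dynamics, noise level, and delay below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(s):
    # Toy contracting linear dynamics standing in for the environment.
    return 0.9 * s + 0.1

def noisy_one_step(s, eps=0.05):
    # Imperfect learned one-step model: Gaussian error of scale eps per call.
    return true_dynamics(s) + rng.normal(0.0, eps)

def recursive_forecast(s, delay, eps=0.05):
    # Recursive scheme: the model is applied `delay` times, so each step's
    # error is carried forward and new error is added on top.
    for _ in range(delay):
        s = noisy_one_step(s, eps)
    return s

def direct_forecast(s, delay, eps=0.05):
    # Direct scheme (in the spirit of DFBT): one prediction of the state
    # `delay` steps ahead, so only a single error term is incurred.
    t = s
    for _ in range(delay):
        t = true_dynamics(t)
    return t + rng.normal(0.0, eps)

delay, trials, s0 = 10, 2000, 1.0

# Ground-truth state after the delay window.
target = s0
for _ in range(delay):
    target = true_dynamics(target)

rec_err = np.mean([abs(recursive_forecast(s0, delay) - target) for _ in range(trials)])
dir_err = np.mean([abs(direct_forecast(s0, delay) - target) for _ in range(trials)])
print(f"recursive MAE: {rec_err:.4f}, direct MAE: {dir_err:.4f}")
```

Under these assumptions the recursive forecast's mean absolute error is noticeably larger than the direct forecast's, mirroring the compounding-error argument; the actual DFBT model replaces the direct predictor here with a Transformer over the observation history.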