🤖 AI Summary
This work addresses long-horizon structural degradation and error accumulation in model-based reinforcement learning, problems that stem from poor state representations. To this end, the authors propose a decision-time hierarchical planning method grounded in Laplacian representations. For the first time, Laplacian eigenmaps are integrated into decision-time planning to construct semantic distances in state space across multiple time scales, preserving long-range structural information and naturally decomposing tasks into subgoals that make local cost estimation tractable. Combining this representation with the proposed hierarchical planning algorithm, ALPS, in an offline goal-conditioned reinforcement learning framework, the approach significantly outperforms existing baselines on the OGBench benchmark, achieving substantial improvements in planning performance on offline goal-directed tasks.
📝 Abstract
Planning with a learned model remains a key challenge in model-based reinforcement learning (RL). In decision-time planning, state representations are critical: they must support local cost computation while preserving long-horizon structure. In this paper, we show that the Laplacian representation provides an effective latent space for planning by capturing state-space distances at multiple time scales. This representation preserves meaningful distances and naturally decomposes long-horizon problems into subgoals, while also mitigating the compounding errors that arise over long prediction horizons. Building on these properties, we introduce ALPS, a hierarchical planning algorithm, and demonstrate that it outperforms commonly used baselines on a selection of offline goal-conditioned RL tasks from OGBench, a benchmark previously dominated by model-free methods.
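To make the core idea concrete: a Laplacian representation embeds each state using eigenvectors of the graph Laplacian of the state-transition graph, so that Euclidean distances in the embedding reflect how many environment steps separate two states. The sketch below is a minimal toy illustration, not the paper's implementation — the chain MDP, the `laplacian_representation` helper, and the embedding dimension are all assumptions made for demonstration.

```python
import numpy as np

def laplacian_representation(adj, d):
    """Embed states with the d smallest nontrivial eigenvectors
    of the combinatorial graph Laplacian L = D - A."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    # eigh returns eigenvalues in ascending order; the first
    # eigenvector is the trivial constant one (eigenvalue 0).
    # Smaller nonzero eigenvalues encode slower, longer-time-scale
    # structure of the transition graph.
    _, eigvecs = np.linalg.eigh(lap)
    return eigvecs[:, 1:1 + d]

# Hypothetical toy MDP: a chain of 8 states with reversible transitions.
n = 8
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

phi = laplacian_representation(adj, d=2)

def latent_dist(i, j):
    """Distance in the Laplacian latent space."""
    return float(np.linalg.norm(phi[i] - phi[j]))

# States that are few steps apart end up close in latent space,
# so a distant goal naturally decomposes into nearer subgoals.
print(latent_dist(0, 1) < latent_dist(0, 7))  # → True
```

In a planner, such distances can serve as local cost estimates between a state and a candidate subgoal, which is the role the Laplacian latent space plays in the hierarchical scheme described above.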