Dual Goal Representations

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
In goal-conditioned reinforcement learning (GCRL), goal representations are vulnerable to exogenous noise and struggle to simultaneously remain consistent with the environment's dynamics and retain enough information to recover an optimal policy. To address this, the paper proposes dual goal representations, which characterize a state by its temporal distances from all other states. Because this representation depends only on the intrinsic dynamics of the environment, it is invariant to the original state representation and robust to exogenous noise, and it provably contains sufficient information to recover an optimal goal-reaching policy. The resulting representation learning method is plug-and-play compatible with any existing GCRL algorithm. Evaluation across 20 state- and pixel-based tasks from the OGBench suite shows consistent improvements in offline goal-reaching performance.

📝 Abstract
In this work, we introduce dual goal representations for goal-conditioned reinforcement learning (GCRL). A dual goal representation characterizes a state by "the set of temporal distances from all other states"; in other words, it encodes a state through its relations to every other state, measured by temporal distance. This representation provides several appealing theoretical properties. First, it depends only on the intrinsic dynamics of the environment and is invariant to the original state representation. Second, it contains provably sufficient information to recover an optimal goal-reaching policy, while being able to filter out exogenous noise. Based on this concept, we develop a practical goal representation learning method that can be combined with any existing GCRL algorithm. Through diverse experiments on the OGBench task suite, we empirically show that dual goal representations consistently improve offline goal-reaching performance across 20 state- and pixel-based tasks.
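The core idea in the abstract, encoding a state through its temporal distances from all other states, can be illustrated with a toy sketch. The names `temporal_distance`, `dual_goal_representation`, and the anchor set are hypothetical, and the distance function here is a trivial stand-in for a learned temporal distance on a 1-D chain of integer states; this is not the paper's implementation.

```python
def temporal_distance(s, g):
    # Toy stand-in for a learned temporal distance d(s, g):
    # number of steps to reach state g from state s on a 1-D chain
    # where each action moves one integer position.
    return abs(g - s)

def dual_goal_representation(g, anchor_states):
    # Encode goal g by its relations to a set of reference states,
    # i.e. the vector of temporal distances from each anchor to g.
    return [temporal_distance(s, g) for s in anchor_states]

anchors = [0, 2, 5, 9]
rep = dual_goal_representation(4, anchors)
print(rep)  # → [4, 2, 1, 5]
```

Note that this encoding only depends on how many steps separate states, not on the numeric labels of the states themselves, which mirrors the paper's claim that the representation depends on intrinsic dynamics and is invariant to the original state representation.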
Problem

Research questions and friction points this paper is trying to address.

Goal representations in GCRL are vulnerable to exogenous noise in observations
Existing representations struggle to simultaneously ensure dynamical consistency and retain enough information to recover an optimal goal-reaching policy
Offline goal-reaching performance across diverse benchmark tasks suffers without robust, algorithm-agnostic goal representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual goal representations encode state relations via temporal distances
Method combines with any existing goal-conditioned reinforcement learning algorithm
Improves offline goal-reaching performance across diverse tasks