Episodic Novelty Through Temporal Distance

📅 2025-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In sparse-reward Contextual Markov Decision Processes (CMDPs), comparing states is difficult and cross-episode exploration is inefficient. To address this, the paper proposes modeling state similarity with temporal distance: temporal distance is introduced, for the first time, as a robust, unsupervised metric over states, and contrastive learning is used to estimate it, circumventing the limitations of conventional count-based methods and hand-crafted metrics. This yields a novel intrinsic reward mechanism that drives the policy to identify novel states efficiently and enhances cross-task generalization. Evaluated on multiple sparse-reward CMDP benchmarks, the approach significantly outperforms state-of-the-art methods, with substantial improvements in both exploration efficiency and final task performance.

📝 Abstract
Exploration in sparse reward environments remains a significant challenge in reinforcement learning, particularly in Contextual Markov Decision Processes (CMDPs), where environments differ across episodes. Existing episodic intrinsic motivation methods for CMDPs primarily rely on count-based approaches, which are ineffective in large state spaces, or on similarity-based methods that lack appropriate metrics for state comparison. To address these shortcomings, we propose Episodic Novelty Through Temporal Distance (ETD), a novel approach that introduces temporal distance as a robust metric for state similarity and intrinsic reward computation. By employing contrastive learning, ETD accurately estimates temporal distances and derives intrinsic rewards based on the novelty of states within the current episode. Extensive experiments on various benchmark tasks demonstrate that ETD significantly outperforms state-of-the-art methods, highlighting its effectiveness in enhancing exploration in sparse reward CMDPs.
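The abstract describes deriving intrinsic rewards from the novelty of states within the current episode, using temporal distances estimated by a contrastively trained encoder. Below is a minimal sketch of the reward side only, assuming a learned embedding `phi` whose distances approximate step counts between states; the function names and the reward normalization are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def temporal_distance(phi_a, phi_b):
    # Hypothetical stand-in: in ETD the encoder is trained with a
    # contrastive objective so that embedding distance tracks the
    # number of environment steps between two states.
    return float(np.linalg.norm(phi_a - phi_b))

def episodic_intrinsic_reward(phi_new, episodic_memory):
    """Score a new state by its minimum temporal distance to states
    already visited in the current episode: far from everything
    seen so far means novel, hence a larger intrinsic reward."""
    if not episodic_memory:
        return 1.0  # first state of the episode is maximally novel
    d_min = min(temporal_distance(phi_new, phi) for phi in episodic_memory)
    return d_min / (d_min + 1.0)  # bounded in [0, 1)
```

The episodic memory would be reset at each episode boundary, so novelty is judged per episode rather than globally, which is what makes the scheme suitable for CMDPs where contexts change across episodes.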
Problem

Research questions and friction points this paper is trying to address.

Sparse Rewards
Temporal State Changes
Exploration Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

ETD (Episodic Novelty Through Temporal Distance)
Novelty Assessment
Sparse Reward Environments
Yuhua Jiang
Tsinghua University
reinforcement learning
Qihan Liu
Tsinghua University
Yiqin Yang
Assistant Professor, Institute of Automation, Chinese Academy of Sciences
Reinforcement Learning, Embodied Intelligence
Xiaoteng Ma
Tsinghua University
Dianyu Zhong
Tsinghua University
Hao Hu
Tsinghua University
Jun Yang
Tsinghua University
Bin Liang
Tsinghua University
Bo Xu
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Chongjie Zhang
Washington University in St. Louis
Qianchuan Zhao
Center for Intelligent and Networked Systems, Dept. of Automation, Tsinghua University, Beijing, China
Networked and Intelligent Systems