Task-Aware Exploration via a Predictive Bisimulation Metric

📅 2026-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of inefficient exploration in visual reinforcement learning under sparse rewards, where task-irrelevant environmental variations often hinder effective policy learning. To mitigate this issue, the authors propose a novel learning framework that integrates predictive bisimulation metrics with task-relevant representations. Exploration is driven by intrinsic novelty in the latent space, while a predictive reward difference mechanism is introduced to alleviate representation collapse. Furthermore, a potential-based exploration bonus is designed to enable task-aware, efficient exploration. Empirical evaluations on the MetaWorld and Maze2D benchmarks demonstrate that the proposed method significantly outperforms existing approaches, achieving superior exploration efficiency and task performance.

📝 Abstract
Accelerating exploration in visual reinforcement learning under sparse rewards remains challenging due to substantial task-irrelevant variations. Despite advances in intrinsic exploration, many methods either assume access to low-dimensional states or lack task-aware exploration strategies, rendering them fragile in visual domains. To bridge this gap, we present TEB, a Task-aware Exploration approach that tightly couples task-relevant representations with exploration through a predictive Bisimulation metric. Specifically, TEB leverages the metric not only to learn behaviorally grounded task representations but also to measure behavioral intrinsic novelty over the learned latent space. To realize this, we first theoretically mitigate the representation collapse of degenerate bisimulation metrics under sparse rewards by introducing a simple but effective predicted reward differential. Building on this robust metric, we design potential-based exploration bonuses that measure the relative novelty of adjacent observations in the latent space. Extensive experiments on MetaWorld and Maze2D show that TEB achieves superior exploration ability and outperforms recent baselines.
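The paper itself is not reproduced here, so the following is only a minimal, illustrative sketch of the two ideas the abstract describes: a bisimulation-style latent distance that uses a *predicted* reward difference (instead of the mostly-zero sparse reward, which can collapse the metric), and a potential-based intrinsic bonus computed from latent-space novelty. All function names, the nearest-neighbor novelty potential, and the coefficients are assumptions, not the authors' implementation.

```python
import numpy as np

def bisim_distance(z_i, z_j, r_hat_i, r_hat_j, next_mu_i, next_mu_j,
                   c_r=0.5, c_t=0.5):
    """Predictive bisimulation-style distance between two latent states.

    Uses predicted rewards r_hat rather than the sparse environment
    reward, so the reward term stays informative under sparse rewards.
    The transition term compares predicted next-state latent means
    (a simple proxy for a Wasserstein distance between next-state
    distributions when predictions are deterministic).
    """
    reward_term = abs(r_hat_i - r_hat_j)
    transition_term = np.linalg.norm(next_mu_i - next_mu_j)
    return c_r * reward_term + c_t * transition_term

def novelty_potential(z, memory, eps=1e-6):
    """Novelty of latent z: distance to its nearest neighbor in a
    memory of previously visited latents (an illustrative choice)."""
    if len(memory) == 0:
        return 1.0
    return min(np.linalg.norm(z - m) for m in memory) + eps

def potential_based_bonus(z_t, z_next, memory, gamma=0.99):
    """Potential-based shaping bonus gamma * Phi(z') - Phi(z):
    positive when the transition moves toward a relatively more
    novel region of the latent space, and, being potential-based,
    it leaves the optimal policy of the underlying task unchanged."""
    return (gamma * novelty_potential(z_next, memory)
            - novelty_potential(z_t, memory))
```

In a training loop, `z_t` and `z_next` would come from the learned encoder and the bonus would be added to the environment reward; here the potential is a nearest-neighbor distance purely for concreteness.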
Problem

Research questions and friction points this paper is trying to address.

visual reinforcement learning
sparse rewards
task-aware exploration
intrinsic exploration
representation collapse
Innovation

Methods, ideas, or system contributions that make the work stand out.

predictive bisimulation
task-aware exploration
representation learning
intrinsic motivation
sparse rewards
Dayang Liang
Department of Automation, Xiamen University, Xiamen, China
Ruihan Liu
Department of Computer Science, Beijing Normal-Hong Kong Baptist University, Zhuhai, China
Lipeng Wan
Georgia State University
Scientific Data Management, HPC, Data-Intensive Computing, Storage and I/O, System Resilience
Yunlong Liu
Department of Automation, Xiamen University, Xiamen, China
Bo An
Nanyang Technological University
Artificial intelligence, multi-agent systems, game theory, reinforcement learning, optimization