Decoupling Exploration and Policy Optimization: Uncertainty Guided Tree Search for Hard Exploration

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Efficient autonomous exploration in sparse-reward environments remains a fundamental challenge in reinforcement learning. This work proposes a paradigm that decouples exploration from policy optimization: during the exploration phase it forgoes conventional reinforcement learning and instead uses a “Go-With-The-Winner” tree search, guided by epistemic uncertainty, to actively expand state coverage; it then distills the collected exploration trajectories into a deployable policy via supervised backward learning. The approach requires neither expert demonstrations nor domain-specific knowledge and operates end-to-end from pixel inputs. It substantially outperforms existing methods on hard-exploration Atari benchmarks such as Montezuma’s Revenge, Pitfall!, and Venture, improving exploration efficiency by an order of magnitude, and is, per the authors, the first method to solve high-dimensional continuous-control sparse-reward tasks, including the MuJoCo Adroit and AntMaze suites, directly from pixels.
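The exploration phase described above can be sketched in miniature. The following is a hedged toy illustration, not the paper's implementation: `gwtw_search`, the count-based uncertainty proxy, and the simple random rollouts are all stand-ins (the paper operates from pixels and uses a learned estimate of epistemic uncertainty). The core idea shown is the Go-With-The-Winner flavor: repeatedly re-expand from the most uncertain ("winning") discovered node rather than optimizing a policy.

```python
import heapq
import random


def gwtw_search(step, start, n_expansions=200, rollout_len=5,
                actions=(0, 1), seed=0):
    """Toy uncertainty-guided tree-search sketch (not the paper's code).

    `step(state, action) -> next_state` is a deterministic simulator.
    Epistemic uncertainty is approximated here by inverse visit counts.
    Returns (visits, parent): discovered states with counts, and a
    parent map for reconstructing a trajectory back to `start`.
    """
    rng = random.Random(seed)
    visits = {start: 1}
    parent = {start: None}
    # Max-heap on uncertainty (negated), with a tie counter so entries
    # with equal uncertainty still compare cleanly.
    frontier = [(-1.0, 0, start)]
    tie = 1
    for _ in range(n_expansions):
        if not frontier:
            break
        _, _, node = heapq.heappop(frontier)  # most-uncertain "winner"
        s = node
        for _ in range(rollout_len):  # short random rollout from the winner
            s2 = step(s, rng.choice(actions))
            if s2 not in visits:
                visits[s2] = 0
                parent[s2] = s  # first-discovery parent, set once
            visits[s2] += 1
            s = s2
        for n in (node, s):
            u = 1.0 / visits[n]  # count-based uncertainty proxy
            heapq.heappush(frontier, (-u, tie, n))
            tie += 1
    return visits, parent
```

Because parents are assigned only on first discovery, the parent map is acyclic, and any reached state can be traced back to the start to recover an exploration trajectory for the later distillation phase.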

📝 Abstract
The process of discovery requires active exploration -- the act of collecting new and informative data. However, efficient autonomous exploration remains a major unsolved problem. The dominant paradigm addresses this challenge by using Reinforcement Learning (RL) to train agents with intrinsic motivation, maximizing a composite objective of extrinsic and intrinsic rewards. We suggest that this approach incurs unnecessary overhead: while policy optimization is necessary for precise task execution, employing such machinery solely to expand state coverage may be inefficient. In this paper, we propose a new paradigm that explicitly separates exploration from exploitation and bypasses RL during the exploration phase. Our method uses a tree-search strategy inspired by the Go-With-The-Winner algorithm, paired with a measure of epistemic uncertainty to systematically drive exploration. By removing the overhead of policy optimization, our approach explores an order of magnitude more efficiently than standard intrinsic motivation baselines on hard Atari benchmarks. Further, we demonstrate that the discovered trajectories can be distilled into deployable policies using existing supervised backward learning algorithms, achieving state-of-the-art scores by a wide margin on Montezuma's Revenge, Pitfall!, and Venture without relying on domain-specific knowledge. Finally, we demonstrate the generality of our framework in high-dimensional continuous action spaces by solving the MuJoCo Adroit dexterous manipulation and AntMaze tasks in a sparse-reward setting, directly from image observations and without expert demonstrations or offline datasets. To the best of our knowledge, this has not been achieved before.
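The distillation step the abstract mentions — turning discovered trajectories into a deployable policy with supervised learning — can be sketched as plain behavior cloning. This is a toy stand-in under stated assumptions: `distill_policy`, the linear softmax model, and the feature-tuple encoding are illustrative inventions; the paper distills from image observations with existing supervised backward learning algorithms.

```python
import math
import random


def distill_policy(trajectories, n_actions=2, lr=0.5, epochs=200, seed=0):
    """Behavior-cloning sketch: fit a linear softmax policy to
    (state_features, action) pairs harvested from exploration
    trajectories, via cross-entropy gradient descent."""
    data = [pair for traj in trajectories for pair in traj]
    d = len(data[0][0])
    rng = random.Random(seed)
    W = [[rng.uniform(-0.1, 0.1) for _ in range(d)] for _ in range(n_actions)]
    for _ in range(epochs):
        for x, a in data:
            logits = [sum(wi * xi for wi, xi in zip(w, x)) for w in W]
            m = max(logits)  # stabilize the softmax
            exps = [math.exp(l - m) for l in logits]
            z = sum(exps)
            probs = [e / z for e in exps]
            for k in range(n_actions):
                # Softmax cross-entropy gradient: p_k - 1{k == a}
                g = probs[k] - (1.0 if k == a else 0.0)
                for i in range(d):
                    W[k][i] -= lr * g * x[i]

    def policy(x):
        logits = [sum(wi * xi for wi, xi in zip(w, x)) for w in W]
        return max(range(n_actions), key=lambda k: logits[k])

    return policy
```

The design point this mirrors is that the hard part — discovering good trajectories — has already been solved by the search phase, so the policy can be obtained with cheap supervised learning rather than reinforcement learning.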
Problem

Research questions and friction points this paper is trying to address.

exploration
reinforcement learning
hard exploration
sparse reward
autonomous exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

decoupled exploration
uncertainty-guided tree search
epistemic uncertainty
hard exploration
sparse-reward RL