Offline Reinforcement Learning with Discrete Diffusion Skills

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In offline reinforcement learning, continuous skill representations struggle to simultaneously ensure interpretability, training stability, and generalization to long-horizon tasks. This paper proposes a hierarchical offline RL framework built on a compact discrete skill space, pairing a Transformer encoder that captures high-level skill semantics with a discrete diffusion decoder that generates interpretable, robust action sequences. The architecture improves policy interpretability and training stability while supporting more adaptive exploration during online deployment. On standard benchmarks, the resulting method, Discrete Diffusion Skill (DDS), performs competitively on Locomotion and Kitchen tasks, achieves at least a 12% gain on AntMaze-v2, and shows its largest advantages on long-horizon, sparse-reward tasks.

📝 Abstract
Skills have been introduced to offline reinforcement learning (RL) as temporal abstractions to tackle complex, long-horizon tasks, promoting consistent behavior and enabling meaningful exploration. While skills in offline RL are predominantly modeled within a continuous latent space, the potential of discrete skill spaces remains largely underexplored. In this paper, we propose a compact discrete skill space for offline RL tasks, supported by a state-of-the-art transformer-based encoder and a diffusion-based decoder. Coupled with a high-level policy trained via offline RL techniques, our method establishes a hierarchical RL framework in which the trained diffusion decoder plays a pivotal role. Empirical evaluations show that the proposed algorithm, Discrete Diffusion Skill (DDS), is a powerful offline RL method. DDS performs competitively on Locomotion and Kitchen tasks and excels on long-horizon tasks, achieving at least a 12 percent improvement on AntMaze-v2 benchmarks compared to existing offline RL approaches. Furthermore, DDS offers improved interpretability, training stability, and online exploration compared to previous skill-based methods.
Problem

Research questions and friction points this paper is trying to address.

Exploring discrete skill spaces in offline reinforcement learning.
Developing a hierarchical RL framework with diffusion-based skills.
Improving performance and interpretability in long-horizon tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discrete skill space for offline RL
Transformer encoder and diffusion decoder
Hierarchical RL with diffusion skills
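The hierarchical structure described above can be sketched in a few lines. This is a minimal, illustrative Python/NumPy skeleton, not the paper's implementation: the skill vocabulary size, horizon, dimensions, and all function bodies are placeholder assumptions (a real system would use a learned high-level policy and an iterative discrete-diffusion denoiser for the decoder).

```python
import numpy as np

# Illustrative sketch of the hierarchy: all names and dimensions are
# hypothetical stand-ins, not taken from the paper.
N_SKILLS = 16      # size of the discrete skill vocabulary
HORIZON = 8        # low-level action steps generated per skill
STATE_DIM = 4
ACTION_DIM = 2

rng = np.random.default_rng(0)

# Stand-in for the codebook learned by the transformer encoder:
# each discrete skill index maps to an embedding vector.
skill_codebook = rng.normal(size=(N_SKILLS, 8))

def high_level_policy(state):
    """Select a discrete skill index for the current state.
    Random placeholder for a policy trained with offline RL."""
    logits = rng.normal(size=N_SKILLS)
    return int(np.argmax(logits))

def diffusion_decoder(skill_idx, state):
    """Decode a skill index into a HORIZON-long action sequence.
    A real discrete diffusion decoder would iteratively denoise;
    here we return a deterministic function of the skill embedding."""
    z = skill_codebook[skill_idx]
    return np.tanh(np.outer(np.arange(1, HORIZON + 1), z[:ACTION_DIM]))

def rollout(state, n_skills=3):
    """High-level policy picks skills; the decoder expands each one
    into low-level actions, yielding a full trajectory."""
    trajectory = []
    for _ in range(n_skills):
        k = high_level_policy(state)
        trajectory.extend(diffusion_decoder(k, state))
    return np.array(trajectory)

traj = rollout(np.zeros(STATE_DIM))
print(traj.shape)  # → (24, 2): 3 skills x 8 actions each
```

The point of the sketch is the division of labor: the high-level policy operates only over a small discrete vocabulary (which aids interpretability and offline training stability), while the decoder alone is responsible for producing temporally extended behavior.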