🤖 AI Summary
In offline reinforcement learning, continuous skill representations struggle to simultaneously provide interpretability, training stability, and generalization to long-horizon tasks. This paper proposes Discrete Diffusion Skill (DDS), a hierarchical offline RL framework built on a compact discrete skill space. It pairs a Transformer-based encoder, which captures high-level skill semantics, with a discrete diffusion decoder that generates interpretable and robust skill-conditioned behavior. This architecture improves policy interpretability and training stability while supporting exploration during online deployment. On standard benchmarks, DDS achieves at least a 12% gain on AntMaze-v2 over existing offline RL approaches, performs competitively on Locomotion and Kitchen tasks, and shows strong generalization and robustness, particularly on long-horizon, sparse-reward tasks.
📝 Abstract
Skills have been introduced to offline reinforcement learning (RL) as temporal abstractions to tackle complex, long-horizon tasks, promoting consistent behavior and enabling meaningful exploration. While skills in offline RL are predominantly modeled within a continuous latent space, the potential of discrete skill spaces remains largely underexplored. In this paper, we propose a compact discrete skill space for offline RL tasks, supported by a state-of-the-art transformer-based encoder and a diffusion-based decoder. Coupled with a high-level policy trained via offline RL techniques, our method establishes a hierarchical RL framework in which the trained diffusion decoder plays a pivotal role. Empirical evaluations show that the proposed algorithm, Discrete Diffusion Skill (DDS), is a powerful offline RL method. DDS performs competitively on Locomotion and Kitchen tasks and excels on long-horizon tasks, achieving at least a 12 percent improvement on AntMaze-v2 benchmarks compared to existing offline RL approaches. Furthermore, DDS offers improved interpretability, training stability, and online exploration compared to previous skill-based methods.
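The hierarchical loop the abstract describes (a high-level policy selecting a discrete skill, which a decoder expands into a sequence of low-level actions) can be sketched as below. All names, dimensions, and the linear stand-ins are hypothetical: the actual method uses a trained Transformer encoder and a discrete diffusion decoder, neither of which is specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_SKILLS = 16       # size of the compact discrete skill space (assumed)
HORIZON = 8           # env steps executed per skill before re-selection (assumed)
STATE_DIM, ACTION_DIM = 4, 2

# Stand-in for the trained high-level policy: score every discrete skill
# from the current state and pick the best one (hypothetical linear scorer).
W_high = rng.normal(size=(NUM_SKILLS, STATE_DIM))

def high_level_policy(state):
    return int(np.argmax(W_high @ state))

# Stand-in for the diffusion decoder: map a discrete skill index plus the
# current state to a length-HORIZON action sequence. The real decoder runs
# iterative denoising; a per-skill linear map is enough to show the interface.
W_dec = rng.normal(size=(NUM_SKILLS, HORIZON, ACTION_DIM, STATE_DIM))

def decode_skill(skill, state):
    return W_dec[skill] @ state  # shape (HORIZON, ACTION_DIM)

def rollout(state, num_skill_selections=3):
    """Hierarchical rollout: pick a skill, execute its decoded actions, repeat."""
    actions = []
    for _ in range(num_skill_selections):
        z = high_level_policy(state)            # discrete skill index
        actions.append(decode_skill(z, state))  # low-level action chunk
        state = state + 0.01 * rng.normal(size=STATE_DIM)  # toy dynamics placeholder
    return np.concatenate(actions)  # (num_skill_selections * HORIZON, ACTION_DIM)

traj = rollout(rng.normal(size=STATE_DIM))
print(traj.shape)  # → (24, 2)
```

The key structural point is the temporal abstraction: the high-level policy acts only once per `HORIZON` environment steps over a small discrete set, which is what makes the skill choice easy to inspect and the high-level offline RL problem shorter-horizon.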