🤖 AI Summary
This work proposes Hybrid TD3, a novel reinforcement learning algorithm designed to address overestimation bias and training instability in discrete-continuous hybrid action spaces. Building upon the TD3 framework, the study provides the first formal analysis of Q-value overestimation bias in such hybrid settings and establishes an ordering of bias magnitudes across five algorithmic variants. To mitigate these issues, Hybrid TD3 introduces a weighted-clipped Q-learning objective combined with a marginalization mechanism over the discrete action distribution, enabling joint optimization of high-level decision-making and low-level control. Experimental results across multiple robotic manipulation tasks demonstrate that Hybrid TD3 significantly enhances training stability and achieves superior performance compared to existing hybrid-action baselines, particularly in high-dimensional action spaces and under domain randomization.
📝 Abstract
Reinforcement learning in discrete-continuous hybrid action spaces presents fundamental challenges for robotic manipulation, where high-level task decisions and low-level joint-space execution must be jointly optimized. Existing approaches either discretize continuous components or relax discrete choices into continuous approximations; both strategies suffer from scalability limitations and training instability in high-dimensional action spaces and under domain randomization. In this paper, we propose Hybrid TD3, an extension of Twin Delayed Deep Deterministic Policy Gradient (TD3) that natively handles parameterized hybrid action spaces in a principled manner. We conduct a rigorous theoretical analysis of overestimation bias in hybrid action settings, deriving formal bounds under twin-critic architectures and establishing a complete bias ordering across five algorithmic variants. Building on this analysis, we introduce a weighted clipped Q-learning target that marginalizes over the discrete action distribution, achieving bias reduction equivalent to standard clipped minimization while improving policy smoothness. Experimental results demonstrate that Hybrid TD3 achieves superior training stability and competitive performance against state-of-the-art hybrid action baselines.
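To make the core idea concrete, here is a minimal sketch of a TD target that takes the element-wise minimum of twin-critic values and marginalizes it over the discrete action distribution, as the abstract describes. The function and variable names (`weighted_clipped_target`, `pi_disc`, `q1_next`, `q2_next`) are illustrative assumptions, not identifiers from the paper:

```python
import numpy as np

def weighted_clipped_target(reward, gamma, pi_disc, q1_next, q2_next):
    """Hypothetical sketch of a weighted clipped Q-target.

    pi_disc : (K,) probability distribution over K discrete actions at s'
    q1_next : (K,) critic-1 values, one per discrete action (each already
              evaluated at its associated continuous parameters)
    q2_next : (K,) critic-2 values for the same discrete actions
    """
    # Clipped double-Q estimate per discrete action: take the pessimistic
    # (element-wise minimum) of the two critics, as in standard TD3.
    clipped = np.minimum(q1_next, q2_next)
    # Marginalize the clipped values over the discrete action distribution
    # instead of committing to a single discrete choice.
    expected = float(np.dot(pi_disc, clipped))
    return reward + gamma * expected

# Usage: three discrete options, twin-critic values at the next state.
y = weighted_clipped_target(
    reward=1.0, gamma=0.99,
    pi_disc=np.array([0.5, 0.3, 0.2]),
    q1_next=np.array([2.0, 1.0, 3.0]),
    q2_next=np.array([1.5, 2.0, 2.5]),
)
```

Under these assumptions, the target averages the pessimistic critic values across discrete options weighted by their probabilities, which is what allows the bias reduction of clipped minimization to carry over while keeping the target smooth in the discrete distribution.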