HAD: Combining Hierarchical Diffusion with Metric-Decoupled RL for End-to-End Driving

📅 2026-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
End-to-end autonomous driving planning faces several challenges: optimizing over a large candidate-trajectory space is difficult, diffusion processes are prone to generating unrealistic trajectories, and reinforcement learning typically relies on a single, monolithic reward signal. To address these issues, this work proposes the Hierarchical Autonomous Driving (HAD) framework, which employs a hierarchical diffusion policy for coarse-to-fine trajectory planning, introduces a structure-preserving trajectory expansion mechanism to generate plausible candidates, and devises Metric-Decoupled Policy Optimization (MDPO) to enable structured multi-objective reinforcement learning. The method achieves state-of-the-art performance, improving EPDMS by +2.3 on NAVSIM and route completion rate by +4.9% on HUGSIM, enhancing both planning performance and stability.
📝 Abstract
End-to-end planning has emerged as a dominant paradigm for autonomous driving, where recent models often adopt a scoring-selection framework to choose trajectories from a large set of candidates, with diffusion-based decoding showing strong promise. However, directly selecting from the entire candidate space remains difficult to optimize, and the Gaussian perturbations used in diffusion often introduce unrealistic trajectories that complicate the denoising process. In addition, reinforcement learning (RL) has shown promise for training these models, but existing end-to-end RL approaches typically rely on a single coupled reward without structured signals, limiting optimization effectiveness. To address these challenges, we propose HAD, an end-to-end planning framework with a Hierarchical Diffusion Policy that decomposes planning into a coarse-to-fine process. To improve trajectory generation, we introduce Structure-Preserved Trajectory Expansion, which produces realistic candidates while maintaining kinematic structure. For policy learning, we develop Metric-Decoupled Policy Optimization (MDPO) to enable structured RL optimization across multiple driving objectives. Extensive experiments show that HAD achieves new state-of-the-art performance on both NAVSIM and HUGSIM, outperforming prior art by a clear margin: +2.3 EPDMS on NAVSIM and +4.9 Route Completion on HUGSIM.
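The abstract's core RL idea is that each driving metric (collision, progress, comfort, etc.) contributes its own structured signal rather than being folded into one coupled scalar reward. A minimal, hypothetical sketch of what such metric-decoupled advantage computation could look like is below; the paper's actual MDPO algorithm is not specified here, so the function, metric names, and weights are illustrative assumptions only:

```python
# Hypothetical sketch: each metric is baselined and normalized on its
# own across the candidate trajectories, and the per-metric advantages
# are combined afterwards -- as opposed to summing all metrics into a
# single reward and normalizing once. Not the paper's actual algorithm.

def decoupled_advantages(metric_scores, weights):
    """metric_scores: {metric_name: [score per candidate trajectory]}
    weights: {metric_name: weight} -- illustrative, not from the paper."""
    n = len(next(iter(metric_scores.values())))
    combined = [0.0] * n
    for name, scores in metric_scores.items():
        mean = sum(scores) / len(scores)
        var = sum((s - mean) ** 2 for s in scores) / len(scores)
        std = var ** 0.5 or 1.0  # guard against a constant metric
        for i, s in enumerate(scores):
            # per-metric baseline and scale, applied before mixing
            combined[i] += weights[name] * (s - mean) / std
    return combined

# Example: two metrics scored over three candidate trajectories.
adv = decoupled_advantages(
    {"collision": [1.0, 1.0, 0.0], "progress": [0.2, 0.8, 0.5]},
    {"collision": 1.0, "progress": 0.5},
)
```

Because each metric is normalized independently, a high-variance metric cannot drown out a low-variance one before the weighting step, which is one plausible reading of why a decoupled signal aids multi-objective optimization.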
Problem

Research questions and friction points this paper is trying to address.

end-to-end driving
trajectory selection
diffusion-based decoding
reinforcement learning
reward coupling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Diffusion
Structure-Preserved Trajectory Expansion
Metric-Decoupled Policy Optimization
End-to-End Driving
Reinforcement Learning