ZTRS: Zero-Imitation End-to-end Autonomous Driving with Trajectory Scoring

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing end-to-end autonomous driving approaches predominantly rely on imitation learning (IL), rendering them vulnerable to suboptimal expert demonstrations and covariate shift; while reinforcement learning (RL) offers greater robustness, it has long been constrained to low-dimensional symbolic inputs, hindering pure end-to-end training from raw sensor data. This paper introduces ZTRS (Zero-Imitation End-to-End Autonomous Driving with Trajectory Scoring), the first fully IL-free, reward-driven end-to-end framework that operates directly on high-dimensional sensor inputs. ZTRS trains with offline RL via the proposed Exhaustive Policy Optimization (EPO), a policy-gradient variant tailored to enumerable action spaces, thereby bypassing error propagation from perception modules and mapping raw observations directly to trajectory outputs. Evaluated on three benchmarks (Navtest, Navhard, and HUGSIM), ZTRS achieves significant gains over prior methods: it sets a new state of the art on Navhard and outperforms mainstream IL-based baselines on HUGSIM.

📝 Abstract
End-to-end autonomous driving maps raw sensor inputs directly into ego-vehicle trajectories to avoid cascading errors from perception modules and to leverage rich semantic cues. Existing frameworks largely rely on Imitation Learning (IL), which can be limited by sub-optimal expert demonstrations and covariate shift during deployment. On the other hand, Reinforcement Learning (RL) has recently shown potential in scaling up with simulations, but is typically confined to low-dimensional symbolic inputs (e.g. 3D objects and maps), falling short of full end-to-end learning from raw sensor data. We introduce ZTRS (Zero-Imitation End-to-End Autonomous Driving with Trajectory Scoring), a framework that combines the strengths of both worlds: sensor inputs without losing information and RL training for robust planning. To the best of our knowledge, ZTRS is the first framework that eliminates IL entirely by only learning from rewards while operating directly on high-dimensional sensor data. ZTRS utilizes offline reinforcement learning with our proposed Exhaustive Policy Optimization (EPO), a variant of policy gradient tailored for enumerable actions and rewards. ZTRS demonstrates strong performance across three benchmarks: Navtest (generic real-world open-loop planning), Navhard (open-loop planning in challenging real-world and synthetic scenarios), and HUGSIM (simulated closed-loop driving). Specifically, ZTRS achieves the state-of-the-art result on Navhard and outperforms IL-based baselines on HUGSIM. Code will be available at https://github.com/woxihuanjiangguo/ZTRS.
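The key idea behind EPO, as the abstract describes it, is that the action space is an enumerable set of candidate trajectories, so the policy-gradient expectation can be computed exactly over all actions instead of being estimated from sampled rollouts. A minimal sketch of that computation follows; the softmax score parameterization, function names, and toy reward values are illustrative assumptions, not the paper's actual implementation.

```python
import math

def softmax(scores):
    """Convert raw trajectory scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def exhaustive_pg_loss_and_grad(scores, rewards):
    """Exact policy gradient over an enumerable trajectory vocabulary.

    With a finite action set, the expected reward J = sum_k pi_k * r_k
    is computed exactly rather than sampled. For a softmax policy, the
    gradient of the loss -J with respect to score s_k is pi_k * (J - r_k).
    """
    probs = softmax(scores)
    expected_reward = sum(p * r for p, r in zip(probs, rewards))
    loss = -expected_reward
    grads = [p * (expected_reward - r) for p, r in zip(probs, rewards)]
    return loss, grads

# Toy example: 4 candidate trajectories scored by a planning head,
# each with a precomputed reward (e.g. a rule-based driving score in [0, 1]).
scores = [0.2, 1.5, -0.3, 0.8]
rewards = [0.1, 0.9, 0.0, 0.4]
loss, grads = exhaustive_pg_loss_and_grad(scores, rewards)
# Descending this gradient shifts probability mass toward high-reward trajectories.
```

Note that no expert trajectory appears anywhere in the objective: supervision comes entirely from the per-trajectory rewards, which is what makes the training IL-free.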
Problem

Research questions and friction points this paper is trying to address.

Dependence of existing end-to-end driving frameworks on imitation learning
Limitation of prior RL approaches to low-dimensional symbolic inputs rather than raw sensor data
Vulnerability to covariate shift and suboptimal expert demonstrations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Exhaustive Policy Optimization (EPO), a policy-gradient variant for enumerable actions and rewards
Eliminates imitation learning entirely by training only from rewards via offline RL
Operates directly on high-dimensional raw sensor data