🤖 AI Summary
End-to-end driving policies trained by imitation often fail in closed-loop deployment because small errors compound without recovery data. To address this, the paper proposes Rasterization Augmented Planning (RAP), a lightweight 3D rasterization-based data augmentation framework that generates counterfactual recovery trajectories and cross-agent synthetic views by rendering annotated primitives rather than photorealistic scenes. A raster-to-real feature-space alignment then bridges the sim-to-real gap, making these synthetic views effective for real-world deployment. Evaluated on four major benchmarks (NAVSIM v1/v2, Waymo Open Dataset Vision-based E2E Driving, and Bench2Drive), RAP ranks first, with substantial improvements in closed-loop robustness and long-tail generalization. The results indicate that semantic fidelity and scalability, not photorealism, are what end-to-end planner training requires.
📝 Abstract
Imitation learning for end-to-end driving trains policies only on expert demonstrations. Once deployed in a closed loop, such policies lack recovery data: small mistakes cannot be corrected and quickly compound into failures. A promising direction is to generate alternative viewpoints and trajectories beyond the logged path. Prior work explores photorealistic digital twins via neural rendering or game engines, but these methods are prohibitively slow and costly, and thus mainly used for evaluation. In this work, we argue that photorealism is unnecessary for training end-to-end planners. What matters is semantic fidelity and scalability: driving depends on geometry and dynamics, not textures or lighting. Motivated by this, we propose 3D Rasterization, which replaces costly rendering with lightweight rasterization of annotated primitives, enabling augmentations such as counterfactual recovery maneuvers and cross-agent view synthesis. To transfer these synthetic views effectively to real-world deployment, we introduce a Raster-to-Real feature-space alignment that bridges the sim-to-real gap. Together, these components form Rasterization Augmented Planning (RAP), a scalable data augmentation pipeline for planning. RAP achieves state-of-the-art closed-loop robustness and long-tail generalization, ranking first on four major benchmarks: NAVSIM v1/v2, Waymo Open Dataset Vision-based E2E Driving, and Bench2Drive. Our results show that lightweight rasterization with feature alignment suffices to scale E2E training, offering a practical alternative to photorealistic rendering. Project page: https://alan-lanfeng.github.io/RAP/.
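To make the core idea concrete, here is a minimal conceptual sketch (not the paper's implementation) of rasterizing annotated primitives, lane polylines and agent boxes, into a bird's-eye-view semantic grid. All class ids, grid parameters, and function names are illustrative assumptions; the point is that a counterfactual ego offset only shifts the primitives before rasterizing, with no photorealistic re-rendering required.

```python
import numpy as np

LANE, AGENT = 1, 2  # semantic class ids (assumed for illustration)

def world_to_grid(xy, res=0.5, size=64):
    """Map metric BEV coordinates (ego at grid center) to integer cells."""
    ij = np.floor(np.asarray(xy) / res).astype(int) + size // 2
    return np.clip(ij, 0, size - 1)

def rasterize(lanes, agents, res=0.5, size=64):
    """Paint lane polylines and axis-aligned agent boxes into a semantic raster."""
    grid = np.zeros((size, size), dtype=np.uint8)
    for poly in lanes:  # densify each polyline segment, then stamp its cells
        for a, b in zip(poly[:-1], poly[1:]):
            pts = np.linspace(a, b, num=50)
            i, j = world_to_grid(pts, res, size).T
            grid[j, i] = LANE
    for cx, cy, w, l in agents:  # box center (m) and width/length extents (m)
        i0, j0 = world_to_grid([cx - w / 2, cy - l / 2], res, size)
        i1, j1 = world_to_grid([cx + w / 2, cy + l / 2], res, size)
        grid[j0:j1 + 1, i0:i1 + 1] = AGENT  # agents over-paint lane cells
    return grid

# A straight lane through the scene and one agent ahead of the ego vehicle.
lanes = [np.array([[-15.0, 0.0], [15.0, 0.0]])]
agents = [(5.0, 0.0, 2.0, 4.5)]
bev = rasterize(lanes, agents)

# A counterfactual recovery viewpoint: shift all primitives by the negated
# ego offset and rasterize again -- the augmentation is just geometry.
offset = np.array([0.0, 1.5])  # hypothetical 1.5 m lateral deviation
bev_cf = rasterize([p - offset for p in lanes],
                   [(cx - offset[0], cy - offset[1], w, l)
                    for cx, cy, w, l in agents])
```

This is orders of magnitude cheaper than neural rendering or a game engine, which is why such semantic rasters can be generated at training scale; the raster-to-real alignment described above is what then lets a policy trained on them transfer to real camera features.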