🤖 AI Summary
Current end-to-end motion planners for autonomous driving suffer from a modality mismatch: multimodal large language models (MLLMs) pretrained solely on 2D image-space reasoning perform poorly at self-supervised 3D trajectory planning. Method: We propose a fully self-supervised 3D trajectory generation framework that requires no human annotations. Its core contribution is a sparse voxel mapping strategy that aligns the MLLM's visual representations to 3D spatiotemporal space without fine-tuning the vision encoder, while enabling multi-view and multi-frame fusion. Built on the PaLI architecture, the framework combines a sparse volumetric representation, self-supervised spatiotemporal alignment, multi-view feature aggregation, and end-to-end trajectory decoding. Results: The method matches supervised multi-task approaches on nuScenes and the Waymo Open Motion Dataset while scaling effectively to large volumes of unlabeled driving logs, improving both 3D trajectory prediction accuracy and generalization.
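To make the sparse voxel mapping idea concrete, below is a minimal PyTorch sketch of one plausible way to lift frozen perspective-view features into a sparse set of 3D voxels: project each occupied voxel center into every camera, bilinearly sample the frozen 2D feature maps, and average over the views that actually see the voxel. All shapes, the pinhole camera model, and the function name `lift_to_sparse_voxels` are illustrative assumptions, not the paper's implementation.

```python
# Sketch: lift frozen 2D features into a sparse 3D voxel grid by projection.
# Shapes, names, and the pinhole camera model are assumptions for illustration.
import torch
import torch.nn.functional as F

def lift_to_sparse_voxels(feats, K, T_world2cam, voxel_xyz):
    """feats:       (V, C, H, W) frozen per-view feature maps
       K:           (V, 3, 3)    camera intrinsics
       T_world2cam: (V, 4, 4)    ego/world -> camera extrinsics
       voxel_xyz:   (N, 3)       centers of occupied (sparse) voxels
       returns      (N, C)       per-voxel features averaged over views"""
    V, C, H, W = feats.shape
    N = voxel_xyz.shape[0]
    # Homogeneous voxel centers, transformed into each camera frame.
    pts_h = torch.cat([voxel_xyz, voxel_xyz.new_ones(N, 1)], dim=1)  # (N, 4)
    cam = torch.einsum('vij,nj->vni', T_world2cam, pts_h)            # (V, N, 4)
    xyz = cam[..., :3]
    z = xyz[..., 2:3].clamp(min=1e-5)
    # Perspective projection to pixel coordinates.
    uv = torch.einsum('vij,vnj->vni', K, xyz / z)[..., :2]           # (V, N, 2)
    # Normalize to [-1, 1] for grid_sample (x indexes width, y height).
    grid = torch.stack([uv[..., 0] / (W - 1), uv[..., 1] / (H - 1)], dim=-1)
    grid = grid * 2.0 - 1.0                                          # (V, N, 2)
    sampled = F.grid_sample(feats, grid[:, None], align_corners=True)  # (V, C, 1, N)
    sampled = sampled[:, :, 0].permute(0, 2, 1)                      # (V, N, C)
    # A voxel contributes only from views where it projects in front of
    # the camera and inside the image bounds.
    visible = (cam[..., 2] > 0) & (grid.abs() <= 1).all(dim=-1)      # (V, N)
    weights = visible.float().unsqueeze(-1)
    return (sampled * weights).sum(0) / weights.sum(0).clamp(min=1)
```

Because only occupied voxels are queried, the cost scales with scene sparsity rather than with the full dense grid, and the vision encoder stays frozen throughout, consistent with the no-fine-tuning design the paper describes.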
📝 Abstract
The latest advances in multimodal large language models (MLLMs) have spurred renewed interest in end-to-end motion planning approaches for autonomous driving. Many end-to-end approaches rely on human annotations to learn intermediate perception and prediction tasks, while purely self-supervised approaches, which learn planning trajectories directly from sensor inputs without human annotations, often underperform the state of the art. We observe a key gap in the input representation space: end-to-end approaches built on MLLMs are often pretrained on reasoning tasks in 2D image space rather than the native 3D space in which autonomous vehicles plan. To this end, we propose S4-Driver, a scalable self-supervised motion planning algorithm with spatio-temporal visual representation, based on the popular PaLI multimodal large language model. S4-Driver uses a novel sparse volume strategy to seamlessly transform the strong visual representation of MLLMs from perspective view into 3D space without the need to finetune the vision encoder. This representation aggregates multi-view and multi-frame visual inputs and enables more accurate prediction of planning trajectories in 3D space. To validate our method, we run experiments on both nuScenes and the Waymo Open Motion Dataset (with in-house camera data). Results show that S4-Driver performs favorably against existing supervised multi-task approaches while requiring no human annotations. It also demonstrates great scalability when pretrained on large volumes of unannotated driving logs.
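The self-supervision here comes from the driving logs themselves: future ego poses serve as trajectory labels, so no human annotation is needed. The sketch below shows one hypothetical way an MLLM's text output could be decoded into waypoints and scored against logged ego motion; the "x y; x y" output format and both function names are assumptions for illustration, not the paper's actual decoding scheme.

```python
# Sketch: decode an MLLM's text output into a trajectory and score it with
# average displacement error (ADE). The "x y; x y" format is an assumption.
import torch

def parse_trajectory(text: str) -> torch.Tensor:
    """Parse 'x y' waypoint pairs separated by ';' into an (N, 2) tensor."""
    points = [list(map(float, wp.split())) for wp in text.split(';') if wp.strip()]
    return torch.tensor(points)

def average_displacement_error(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """Mean L2 distance (meters) between predicted and logged waypoints."""
    return (pred - gt).norm(dim=-1).mean().item()

# Ground truth comes for free from the vehicle's own future poses in the log.
pred = parse_trajectory("0.0 1.4; 0.1 2.9; 0.3 4.5")
gt = torch.tensor([[0.0, 1.5], [0.1, 3.0], [0.2, 4.4]])
print(f"ADE: {average_displacement_error(pred, gt):.3f} m")
```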