🤖 AI Summary
To address the insufficient robustness of end-to-end autonomous driving in long-tail driving scenarios, this paper proposes a vision–language–trajectory (VLT) joint modeling paradigm, instantiated as a 3B-parameter multimodal foundation model. Methodologically: (1) self-supervised pretraining is conducted on the CoVLA and Waymo datasets (94 hours total), using next-token prediction to drive cross-modal alignment; (2) lightweight reinforcement fine-tuning with Group Relative Policy Optimization (GRPO) requires fewer than 500 human preference-labeled frames; (3) a 72B-parameter vision-language model (VLM) auto-generates the language annotations, greatly reducing manual labeling effort. Experiments demonstrate state-of-the-art performance: Poutine-Base attains a rater-feedback score (RFS) of 8.12 on the Waymo validation set, nearly matching expert ground-truth performance, and the final Poutine model achieves an RFS of 7.99 on the official Waymo Open Dataset test set, winning the 2025 Waymo Vision-Based End-to-End Driving Challenge.
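The summary above does not spell out how GRPO computes its learning signal, but the defining step of GRPO in general is group-relative advantage normalization: several trajectories are sampled per frame, scored (here, plausibly by a rater-feedback-style reward), and each reward is standardized against its own group's mean and standard deviation, removing the need for a learned value function. A minimal, generic sketch of that step (the reward values below are illustrative, not from the paper):

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: standardize each sampled trajectory's reward
    against the mean and std of its own sampling group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Hypothetical rater-style scores for 4 trajectories sampled from one frame.
advs = group_relative_advantages([7.5, 8.0, 6.0, 8.5])
# Trajectories scored above the group mean get positive advantages,
# those below get negative ones; the advantages sum to ~0.
```

These advantages would then weight a clipped policy-gradient update on the trajectory tokens, as in standard GRPO.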
📝 Abstract
We present Poutine, a 3B-parameter vision-language model (VLM) tailored for end-to-end autonomous driving in long-tail driving scenarios. Poutine is trained in two stages. To obtain strong base driving capabilities, we train Poutine-Base in a self-supervised vision-language-trajectory (VLT) next-token prediction fashion on 83 hours of CoVLA nominal driving and 11 hours of Waymo long-tail driving. Accompanying language annotations are auto-generated with a 72B-parameter VLM. Poutine is obtained by fine-tuning Poutine-Base with Group Relative Policy Optimization (GRPO) using fewer than 500 preference-labeled frames from the Waymo validation set. We show that both VLT pretraining and RL fine-tuning are critical to attain strong driving performance in the long-tail. Poutine-Base achieves a rater-feedback score (RFS) of 8.12 on the validation set, nearly matching Waymo's expert ground-truth RFS. The final Poutine model achieves an RFS of 7.99 on the official Waymo test set, placing 1st in the 2025 Waymo Vision-Based End-to-End Driving Challenge by a significant margin. These results highlight the promise of scalable VLT pretraining and lightweight RL fine-tuning to enable robust and generalizable autonomy.
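VLT next-token prediction implies that continuous trajectory waypoints share a token vocabulary with text, so one causal language-modeling loss covers both. The abstract does not specify Poutine's tokenization scheme; the sketch below is a toy illustration of one common approach, uniform binning of each (x, y) waypoint into a discrete token id, where the coordinate ranges and bin count are assumptions:

```python
def waypoint_to_token(x, y, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), bins=128):
    """Quantize a continuous (x, y) waypoint into one discrete token id.

    Illustrative only: ranges (meters) and bin count are assumed, not taken
    from the paper. The resulting vocabulary has bins * bins trajectory tokens,
    which could be appended to a text vocabulary for joint VLT training.
    """
    def bucket(value, lo, hi):
        value = min(max(value, lo), hi)          # clamp out-of-range waypoints
        idx = int((value - lo) / (hi - lo) * bins)
        return min(idx, bins - 1)                # keep hi endpoint in last bin
    return bucket(x, *x_range) * bins + bucket(y, *y_range)
```

A predicted trajectory is then just a sequence of such tokens, decoded by inverting the binning, so the same next-token objective that aligns vision and language also supervises driving.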