See Less, Drive Better: Generalizable End-to-End Autonomous Driving via Foundation Models Stochastic Patch Selection

📅 2026-01-15
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited out-of-distribution (OOD) generalization of end-to-end autonomous driving policies, which the authors trace to high redundancy in the patch-level features extracted by foundation models: policies trained on such overlapping features overfit spurious correlations. The study identifies and quantifies this redundancy and introduces Stochastic Patch Selection (SPS), a mechanism that randomly masks a subset of image patch features per frame while preserving spatial layout, thereby compelling the policy to learn invariant decision cues from diverse yet complete views of the scene. Using features from foundation models such as BLIP2, analyzed with PCA and cross-patch similarity, SPS improves OOD performance by 6.2% on average (and up to 20.4%) in closed-loop simulation, accelerates inference by 2.4×, enables deployment on a real vehicle without fine-tuning, and outperforms prior state-of-the-art methods in eight of nine ablation settings.
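The summary mentions quantifying feature redundancy via PCA, with 90% of variance captured by 17 of 64 principal components. A minimal sketch of that kind of measurement is shown below; the function name and the synthetic low-rank features are illustrative assumptions, not the paper's actual BLIP2 data or code.

```python
import numpy as np

def components_for_variance(features, threshold=0.90):
    """Count how many principal components capture `threshold` of the
    variance across patch descriptors.

    features: (num_patches, dim) array, one descriptor per image patch.
    """
    centered = features - features.mean(axis=0)
    # Squared singular values of the centered matrix give the
    # per-component variances of a PCA decomposition.
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    ratio = np.cumsum(var) / var.sum()
    return int(np.searchsorted(ratio, threshold) + 1)

# Illustrative run on synthetic rank-12 features plus small noise
# (stand-ins for real patch descriptors):
rng = np.random.default_rng(0)
low_rank = rng.normal(size=(64, 12)) @ rng.normal(size=(12, 256))
noisy = low_rank + 0.01 * rng.normal(size=(64, 256))
print(components_for_variance(noisy))  # far fewer than 64 → redundancy
```

A result much smaller than the number of patches (as in the paper's 17/64 finding) indicates that the descriptors carry heavily overlapping information.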

📝 Abstract
Recent advances in end-to-end autonomous driving show that policies trained on patch-aligned features extracted from foundation models generalize better to Out-of-Distribution (OOD) scenarios. We hypothesize that, due to the self-attention mechanism, each patch feature implicitly embeds information from all other patches, represented in a different way and with a different intensity, making these descriptors highly redundant. We quantify redundancy in such (BLIP2) features via PCA and cross-patch similarity: $90$% of the variance is captured by $17/64$ principal components, and strong inter-token correlations are pervasive. Training on such overlapping information leads the policy to overfit spurious correlations, hurting OOD robustness. We present Stochastic Patch Selection (SPS), a simple yet effective approach for learning policies that are more robust, generalizable, and efficient. For every frame, SPS randomly masks a fraction of the patch descriptors, withholding them from the policy model, while preserving the spatial layout of the remaining patches. The policy is thus provided with different stochastic yet complete views of the same scene: every random subset of patches acts as a different, yet still sensible and coherent, projection of the world. The policy therefore bases its decisions on features that are invariant to which specific tokens survive. Extensive experiments confirm that across all OOD scenarios our method outperforms the state of the art (SOTA), achieving a $6.2$% average improvement and up to $20.4$% in closed-loop simulations, while being $2.4\times$ faster. We conduct ablations over masking rates and patch-feature reorganization, training and evaluating nine systems, eight of which surpass the prior SOTA. Finally, we show that the same learned policy transfers to a physical, real-world car without any tuning.
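The masking step described in the abstract (randomly dropping a fraction of patch descriptors per frame while keeping the spatial layout of the survivors) can be sketched as follows. This is a minimal interpretation, not the authors' implementation: the function name is hypothetical, and zeroing masked rows is one simple way to withhold descriptors while leaving grid positions intact.

```python
import numpy as np

def stochastic_patch_selection(patch_features, keep_ratio=0.5, rng=None):
    """Per-frame stochastic patch masking (sketch).

    patch_features: (num_patches, dim) descriptors laid out on a grid.
    Returns features with a random subset of rows zeroed out; the array
    shape, and hence the spatial layout of surviving patches, is unchanged.
    """
    if rng is None:
        rng = np.random.default_rng()
    num_patches = patch_features.shape[0]
    num_keep = max(1, int(round(keep_ratio * num_patches)))
    keep = rng.choice(num_patches, size=num_keep, replace=False)
    mask = np.zeros((num_patches, 1), dtype=patch_features.dtype)
    mask[keep] = 1.0
    # Masked descriptors contribute nothing; kept ones stay in place.
    return patch_features * mask

# Each call yields a different stochastic "view" of the same scene:
feats = np.random.default_rng(0).normal(size=(64, 256))
view = stochastic_patch_selection(feats, keep_ratio=0.5,
                                  rng=np.random.default_rng(1))
```

Resampling the mask every frame is what exposes the policy to many coherent subsets of the scene, so its decisions cannot depend on any one patch surviving.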
Problem

Research questions and friction points this paper is trying to address.

autonomous driving
out-of-distribution generalization
feature redundancy
end-to-end learning
foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic Patch Selection
foundation models
end-to-end autonomous driving
out-of-distribution generalization
feature redundancy