🤖 AI Summary
Existing end-to-end autonomous driving methods rely on dense Bird's Eye View (BEV) grid representations, leading to computational redundancy and weak planning awareness. To address this, we propose a proposal-centric paradigm that treats candidate trajectories as the central representation: the ProFormer BEV encoder iteratively refines features through proposal-anchored attention and multi-view fusion, while lightweight mapping and prediction auxiliary tasks jointly improve trajectory quality. Crucially, the framework abandons conventional dense BEV representations entirely. Evaluated on NAVSIM and CARLA Bench2Drive, it achieves state-of-the-art planning performance while accelerating inference by 42% and reducing model parameters by 36%.
📝 Abstract
End-to-end (E2E) autonomous driving systems offer a promising alternative to traditional modular pipelines by reducing information loss and error accumulation, with significant potential to enhance both mobility and safety. However, most existing E2E approaches directly generate plans based on dense bird's-eye view (BEV) grid features, leading to inefficiency and limited planning awareness. To address these limitations, we propose iterative Proposal-centric autonomous driving (iPad), a novel framework that places proposals, a set of candidate future plans, at the center of feature extraction and auxiliary tasks. Central to iPad is ProFormer, a BEV encoder that iteratively refines proposals and their associated features through proposal-anchored attention, effectively fusing multi-view image data. Additionally, we introduce two lightweight, proposal-centric auxiliary tasks, mapping and prediction, that improve planning quality with minimal computational overhead. Extensive experiments on the NAVSIM and CARLA Bench2Drive benchmarks demonstrate that iPad achieves state-of-the-art performance while being significantly more efficient than prior leading methods.
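The core mechanism described above, proposal-anchored attention that iteratively refines a small set of proposal queries against multi-view image features, can be sketched in miniature as follows. This is a minimal NumPy illustration, not the paper's implementation: the shapes, the single-head dot-product attention form, the mean-pooled multi-view fusion, and the function name `proposal_anchored_refinement` are all assumptions made for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def proposal_anchored_refinement(proposals, view_feats, n_iters=3, rng=None):
    """Toy sketch: iteratively refine proposal queries by cross-attending
    to image tokens from each camera view, then fusing across views.

    proposals : (P, D) array of candidate-plan query features
    view_feats: (V, N, D) array of per-view image token features
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    P, D = proposals.shape
    V, N, _ = view_feats.shape
    # hypothetical projection for the residual update (shared across iterations)
    W = rng.standard_normal((D, D)) / np.sqrt(D)
    for _ in range(n_iters):
        per_view = []
        for v in range(V):
            # proposals act as queries; each view's tokens are keys/values
            attn = softmax(proposals @ view_feats[v].T / np.sqrt(D))  # (P, N)
            per_view.append(attn @ view_feats[v])                     # (P, D)
        fused = np.mean(per_view, axis=0)   # naive multi-view fusion
        proposals = proposals + fused @ W   # residual refinement step
    return proposals
```

Because only P proposal queries are refined (rather than a dense H×W BEV grid), the per-iteration cost scales with the number of proposals, which is the intuition behind the efficiency claims in the abstract.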