FastDriveVLA: Efficient End-to-End Driving via Plug-and-Play Reconstruction-based Token Pruning

📅 2025-07-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive computational overhead that redundant visual tokens impose on vision-language-action (VLA) models for end-to-end autonomous driving, this paper proposes FastDriveVLA, a plug-and-play, reconstruction-based visual token pruning framework. Its core module, ReconPruner, is trained with MAE-style pixel reconstruction and an adversarial foreground-background reconstruction strategy so that it preserves driving-relevant foreground tokens. Once trained for a given visual encoder, ReconPruner transfers to other VLA models that share that encoder without retraining. To support foreground-aware training, the authors construct nuScenes-FG, a large-scale foreground-annotated dataset derived from nuScenes. On the nuScenes closed-loop planning benchmark, FastDriveVLA achieves state-of-the-art results while reducing the visual token count by over 60% on average, yielding a strong efficiency-accuracy trade-off in driving decision-making.

📝 Abstract
Vision-Language-Action (VLA) models have demonstrated significant potential in complex scene understanding and action reasoning, leading to their increasing adoption in end-to-end autonomous driving systems. However, the long visual token sequences of VLA models greatly increase computational costs. Current visual token pruning methods in Vision-Language Models (VLMs) rely on either visual token similarity or visual-text attention, but both perform poorly in autonomous driving scenarios. Since human drivers concentrate on relevant foreground areas while driving, we argue that retaining visual tokens containing this foreground information is essential for effective decision-making. Inspired by this, we propose FastDriveVLA, a novel reconstruction-based visual token pruning framework designed specifically for autonomous driving. FastDriveVLA includes a plug-and-play visual token pruner called ReconPruner, which prioritizes foreground information through MAE-style pixel reconstruction. A novel adversarial foreground-background reconstruction strategy is designed to train ReconPruner for the visual encoder of VLA models. Once trained, ReconPruner can be seamlessly applied to different VLA models that share the same visual encoder, without retraining. To train ReconPruner, we also introduce a large-scale dataset called nuScenes-FG, consisting of 241K image-mask pairs with annotated foreground regions. Our approach achieves state-of-the-art results on the nuScenes closed-loop planning benchmark across different pruning ratios.
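The inference-time pruning step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `TokenScorer`, its MLP head, and the keep ratio are hypothetical placeholders, and the MAE-style and adversarial reconstruction training that produces the real ReconPruner's scores is not reproduced here.

```python
import torch
import torch.nn as nn

class TokenScorer(nn.Module):
    """Hypothetical stand-in for ReconPruner's scoring head: assigns each
    visual token a foreground-relevance score (higher = keep)."""
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim // 2),
            nn.GELU(),
            nn.Linear(dim // 2, 1),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) -> scores: (B, N)
        return self.mlp(tokens).squeeze(-1)

def prune_tokens(tokens: torch.Tensor, scorer: nn.Module,
                 keep_ratio: float = 0.4) -> torch.Tensor:
    """Keep the top-k highest-scoring tokens. Plug-and-play: the VLA's
    vision encoder and language model are left untouched; only the token
    sequence passed between them is shortened."""
    B, N, D = tokens.shape
    k = max(1, int(N * keep_ratio))
    scores = scorer(tokens)                   # (B, N)
    topk = scores.topk(k, dim=1).indices      # (B, k)
    topk, _ = topk.sort(dim=1)                # preserve spatial order
    idx = topk.unsqueeze(-1).expand(-1, -1, D)
    return tokens.gather(1, idx)              # (B, k, D)

# Example: 576 visual tokens per image pruned to ~40% before the LLM.
tokens = torch.randn(2, 576, 768)
kept = prune_tokens(tokens, TokenScorer(768), keep_ratio=0.4)
print(kept.shape)  # torch.Size([2, 230, 768])
```

Because the scorer depends only on the visual tokens (not on text attention inside the LLM), a pruner trained once per encoder can, as the abstract states, be reused across VLA models built on that encoder.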
Problem

Research questions and friction points this paper is trying to address.

High computational cost of long visual token sequences in VLA models for autonomous driving
Poor performance of similarity- and attention-based token pruning in driving scenarios
Need for a plug-and-play pruner that transfers across VLA models without retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reconstruction-based token pruning for VLA models
Adversarial foreground-background reconstruction strategy
Plug-and-play ReconPruner without retraining
👥 Authors
Jiajun Cao — State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Qizhe Zhang — School of Computer Science, Peking University
Peidong Jia — State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Xuhui Zhao — State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University; XPeng Motors
Bo Lan — State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University; XPeng Motors
Xiaoan Zhang — State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University; XPeng Motors
Xiaobao Wei — Institute of Software, Chinese Academy of Sciences
Sixiang Chen — State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Zhuo Li — XPeng Motors
Yang Wang — XPeng Motors
Liyun Li — XPeng Motors
Xianming Liu — XPeng Motors
Ming Lu — State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Shanghang Zhang — Peking University