🤖 AI Summary
To address the excessive computational overhead that redundant visual tokens impose on vision-language-action (VLA) models for end-to-end autonomous driving, this paper proposes FastDriveVLA, a plug-and-play, reconstruction-based visual token pruning framework. Methodologically, it introduces ReconPruner, a pruning module trained with MAE-style pixel reconstruction and an adversarial foreground-background reconstruction strategy so that it learns to preserve driving-relevant foreground tokens. To support foreground-aware training, the authors construct nuScenes-FG, a large-scale foreground-annotated dataset of 241K image-mask pairs derived from nuScenes. Once trained, ReconPruner transfers to different VLA models that share the same vision encoder without retraining. Evaluated on the nuScenes closed-loop planning benchmark, FastDriveVLA achieves state-of-the-art performance across pruning ratios, reducing the visual token count by over 60% on average and delivering a strong efficiency-accuracy trade-off in driving decision-making.
📝 Abstract
Vision-Language-Action (VLA) models have demonstrated significant potential in complex scene understanding and action reasoning, leading to their increasing adoption in end-to-end autonomous driving systems. However, the long visual token sequences of VLA models greatly increase computational costs. Current visual token pruning methods in Vision-Language Models (VLMs) rely on either visual token similarity or visual-text attention, but both have shown poor performance in autonomous driving scenarios. Given that human drivers concentrate on relevant foreground areas while driving, we assert that retaining visual tokens containing this foreground information is essential for effective decision-making. Inspired by this, we propose FastDriveVLA, a novel reconstruction-based visual token pruning framework designed specifically for autonomous driving. FastDriveVLA includes a plug-and-play visual token pruner called ReconPruner, which prioritizes foreground information through MAE-style pixel reconstruction. A novel adversarial foreground-background reconstruction strategy is designed to train ReconPruner for the visual encoder of VLA models. Once trained, ReconPruner can be seamlessly applied to different VLA models with the same visual encoder without retraining. To train ReconPruner, we also introduce a large-scale dataset called nuScenes-FG, consisting of 241K image-mask pairs with annotated foreground regions. Our approach achieves state-of-the-art results on the nuScenes closed-loop planning benchmark across different pruning ratios.
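To make the pruning step concrete, the following is a minimal sketch of score-based visual token pruning as the abstract describes it: a pruner assigns each patch token a keep score, and only the top-scoring fraction is passed to the language model. The `prune_tokens` function and the random stand-in scores are illustrative assumptions; in FastDriveVLA the scores would come from the trained ReconPruner, whose reconstruction-based training is not reproduced here.

```python
import numpy as np

def prune_tokens(tokens, keep_scores, keep_ratio=0.4):
    """Keep the top-k visual tokens ranked by a per-token keep score.

    tokens:      (N, D) array of patch-token embeddings
    keep_scores: (N,) per-token scores; here an arbitrary stand-in for
                 the foreground-relevance scores a module like
                 ReconPruner would produce
    keep_ratio:  fraction of tokens to retain after pruning
    """
    n_keep = max(1, int(round(len(tokens) * keep_ratio)))
    # indices of the highest-scoring tokens
    keep_idx = np.argsort(keep_scores)[::-1][:n_keep]
    # restore original spatial order before feeding the LLM
    keep_idx = np.sort(keep_idx)
    return tokens[keep_idx], keep_idx

# Toy example: 10 patch tokens of dimension 4, 40% kept.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(10, 4))
scores = rng.random(10)
pruned, idx = prune_tokens(tokens, scores, keep_ratio=0.4)
print(pruned.shape)  # (4, 4)
```

A keep ratio of 0.4 corresponds to the roughly 60% average token reduction reported in the summary; in practice the benchmark is evaluated at several such ratios.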