🤖 AI Summary
To address the limited performance and elevated accident risk of autonomous driving systems in long-tail, safety-critical scenarios, this paper proposes CoReVLA, a two-stage end-to-end framework. In Stage I, a vision-language-action (VLA) model is jointly fine-tuned on open-source driving question-answering data to build a foundational understanding of driving scenarios. In Stage II, the model is deployed in the CAVE simulation platform, where driver takeover data is collected from real-time interactions to construct high-quality long-tail driving trajectories; the model is then refined with Direct Preference Optimization (DPO), learning directly from human preferences and thereby avoiding the reward hacking induced by handcrafted reward functions. This enables continual self-improvement in rare, complex scenarios. Evaluated on the Bench2Drive benchmark, CoReVLA achieves a Driving Score of 72.18 and a Success Rate of 50%, improvements of 7.96 points and 15 percentage points over state-of-the-art methods, demonstrating markedly stronger perceptual robustness and decision reliability in long-tail scenarios.
📝 Abstract
Autonomous Driving (AD) systems have made notable progress, but their performance in long-tail, safety-critical scenarios remains limited. These rare cases contribute a disproportionate number of accidents. Vision-Language Action (VLA) models have strong reasoning abilities and offer a potential solution, but their effectiveness is limited by the lack of high-quality data and inefficient learning in such conditions. To address these challenges, we propose CoReVLA, a continual learning end-to-end autonomous driving framework that improves performance in long-tail scenarios through a dual-stage process of data Collection and behavior Refinement. First, the model is jointly fine-tuned on a mixture of open-source driving QA datasets, allowing it to acquire a foundational understanding of driving scenarios. Next, CoReVLA is deployed within the Cave Automatic Virtual Environment (CAVE) simulation platform, where driver takeover data is collected from real-time interactions. Each takeover indicates a long-tail scenario that CoReVLA fails to handle reliably. Finally, the model is refined via Direct Preference Optimization (DPO), allowing it to learn directly from human preferences and thereby avoid the reward hacking caused by manually designed rewards. Extensive open-loop and closed-loop experiments demonstrate that the proposed CoReVLA model can accurately perceive driving scenarios and make appropriate decisions. On the Bench2Drive benchmark, CoReVLA achieves a Driving Score (DS) of 72.18 and a Success Rate (SR) of 50%, outperforming state-of-the-art methods by 7.96 DS and 15% SR under long-tail, safety-critical scenarios. Furthermore, case studies demonstrate the model's ability to continually improve its performance in similar failure-prone scenarios by leveraging past takeover experiences. All code and preprocessed datasets are available at: https://github.com/FanGShiYuu/CoReVLA
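To make the refinement step concrete, the standard DPO objective scores each preference pair without any explicit reward model: in CoReVLA's setting the "chosen" sequence would correspond to the human driver's takeover behavior and the "rejected" one to the model's failed behavior. The sketch below is a minimal, illustrative per-pair DPO loss in plain Python; the function name and the assumption that inputs are sequence log-probabilities under the policy and a frozen reference model are ours, not taken from the paper's implementation.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Illustrative per-pair DPO loss (hypothetical helper, not the
    authors' code). Inputs are log-probabilities of the chosen
    (human-preferred) and rejected sequences under the trainable
    policy and a frozen reference model; beta scales the implicit
    reward."""
    # Implicit reward margin: how much more the policy prefers the
    # chosen sequence over the rejected one, relative to the reference.
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    # Logistic loss drives the margin positive, so the policy learns the
    # preference directly, with no hand-crafted reward to hack.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When the policy still matches the reference, the margin is 0 and the
# loss is ln 2 ≈ 0.6931; it decreases as the chosen behavior gains
# probability mass.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # 0.6931
```

Minimizing this loss over collected takeover pairs nudges the policy toward the human's behavior while the reference term keeps it anchored to its pre-trained distribution.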