DriveFine: Refining-Augmented Masked Diffusion VLA for Precise and Robust Driving

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language-action (VLA) approaches for autonomous driving planning face limitations in modality alignment, training efficiency, generalization capability, and decoding invertibility. This work proposes DriveFine, a masked diffusion VLA model that introduces a novel plug-and-play block-MoE architecture, fully decoupling the generation expert from the refinement expert. During inference, experts are explicitly selected, while during training, gradient blocking preserves the generality of the pretrained weights. The framework further integrates hybrid reinforcement learning to improve training stability and exploration efficiency. Experimental results demonstrate that DriveFine significantly outperforms current methods on the NAVSIM v1/v2 and Navhard benchmarks, achieving notable advances in both driving accuracy and robustness.

📝 Abstract
Vision-Language-Action (VLA) models for autonomous driving increasingly adopt generative planners trained with imitation learning followed by reinforcement learning. Diffusion-based planners suffer from modality alignment difficulties, low training efficiency, and limited generalization. Token-based planners are plagued by cumulative causal errors and irreversible decoding. In short, the two dominant paradigms exhibit complementary strengths and weaknesses. In this paper, we propose DriveFine, a masked diffusion VLA model that combines flexible decoding with self-correction capabilities. In particular, we design a novel plug-and-play block-MoE, which seamlessly injects a refinement expert on top of the generation expert. By enabling explicit expert selection during inference and gradient blocking during training, the two experts are fully decoupled, preserving the foundational capabilities and generic patterns of the pretrained weights and highlighting the flexibility and extensibility of the block-MoE design. Furthermore, we design a hybrid reinforcement learning strategy that encourages effective exploration by the refinement expert while maintaining training stability. Extensive experiments on the NAVSIM v1, v2, and Navhard benchmarks demonstrate that DriveFine exhibits strong efficacy and robustness. The code will be released at https://github.com/MSunDYY/DriveFine.
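The abstract's core mechanism, explicit expert selection at inference plus gradient blocking at training so the refinement expert cannot perturb the pretrained weights, can be illustrated with a minimal PyTorch sketch. All names here (`BlockMoE`, `use_refiner`, the linear stand-ins for the pretrained block and the two experts) are illustrative assumptions, not the paper's actual architecture; the stop-gradient via `detach()` is one common way to realize the decoupling the abstract describes.

```python
# Hypothetical sketch of the block-MoE decoupling idea (not the paper's code).
# A shared pretrained block feeds two experts; the refinement expert sees a
# detached input, so its loss gradients never reach the pretrained weights,
# and inference selects an expert explicitly via a flag.
import torch
import torch.nn as nn


class BlockMoE(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.shared = nn.Linear(dim, dim)      # stand-in for a pretrained block
        self.gen_expert = nn.Linear(dim, dim)  # generation expert
        self.ref_expert = nn.Linear(dim, dim)  # plug-in refinement expert

    def forward(self, x: torch.Tensor, use_refiner: bool) -> torch.Tensor:
        h = self.shared(x)
        if use_refiner:
            # gradient blocking: refinement updates cannot alter the
            # shared pretrained weights
            return self.ref_expert(h.detach())
        return self.gen_expert(h)


moe = BlockMoE(dim=8)
out = moe(torch.randn(2, 8), use_refiner=True)
out.sum().backward()
print(moe.shared.weight.grad)       # None: blocked by detach()
print(moe.ref_expert.weight.grad)   # populated: refiner still learns
```

The same forward pass with `use_refiner=False` would instead train the generation expert and the shared block jointly, which is the "plug-and-play" property: the refiner can be added or removed without touching the pretrained path.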
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
autonomous driving
diffusion models
token-based planners
modality alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

masked diffusion
block-MoE
refinement expert
hybrid reinforcement learning
Vision-Language-Action (VLA)