StyleVLA: Driving Style-Aware Vision Language Action Model for Autonomous Driving

📅 2026-03-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge that existing vision-language action models struggle to generate kinematically feasible trajectories aligned with specific driving styles in autonomous driving. The authors propose a physics-aware vision-language action framework that integrates natural language instructions with multimodal inputs from both bird's-eye-view (BEV) and first-person-view (FPV) perspectives. By incorporating kinematic consistency constraints and a hybrid loss function featuring a continuous regression head, along with introducing the first large-scale instruction-tuning dataset annotated with multiple driving styles, the method enables controllable and physically plausible trajectory generation. Built upon the Qwen3-VL-4B architecture, the approach significantly outperforms closed-source models such as Gemini-3-Pro on a composite driving score encompassing success rate, physical feasibility, and style consistency, achieving scores of 0.55 and 0.51 for BEV and FPV, respectively.
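The summary describes the composite driving score only as covering success rate, physical feasibility, and style consistency; the exact aggregation formula is not given on this page. A minimal hypothetical sketch, assuming an equal-weight mean of the three components, each normalized to [0, 1]:

```python
def driving_score(success_rate: float, feasibility: float, style_consistency: float) -> float:
    """Hypothetical composite driving score.

    Equal-weight mean of three [0, 1] components. This is an illustrative
    assumption: the actual aggregation used by StyleVLA is not specified here.
    """
    for c in (success_rate, feasibility, style_consistency):
        if not 0.0 <= c <= 1.0:
            raise ValueError("each component must lie in [0, 1]")
    return (success_rate + feasibility + style_consistency) / 3.0
```

Under this reading, a reported score of 0.55 would correspond to components averaging 0.55 overall; any weighted variant would change that interpretation.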

๐Ÿ“ Abstract
Vision Language Models (VLMs) bridge visual perception and linguistic reasoning. In Autonomous Driving (AD), this synergy has enabled Vision Language Action (VLA) models, which translate high-level multimodal understanding into driving behaviors, typically represented as future trajectories. However, existing VLA models mainly generate generic collision-free trajectories. Beyond collision avoidance, adapting to diverse driving styles (e.g., sporty, comfortable) is essential for personalized driving. Moreover, many methods treat trajectory generation as naive token prediction, which can produce kinematically infeasible actions. To address these limitations, we present StyleVLA, a physics-informed VLA framework for generating diverse and physically plausible driving behaviors. We introduce a hybrid loss that combines a kinematic consistency constraint with a continuous regression head to improve trajectory feasibility. To train StyleVLA, built on Qwen3-VL-4B, we construct a large-scale instruction dataset with over 1.2k scenarios, 76k Bird's Eye View (BEV) samples, and 42k First Person View (FPV) samples, with ground-truth trajectories for five driving styles and natural-language instructions. Experiments show that our 4B-parameter StyleVLA significantly outperforms proprietary models (e.g., Gemini-3-Pro) and state-of-the-art VLA models. Using a composite driving score measuring success rate, physical feasibility, and style adherence, StyleVLA achieves 0.55 on BEV and 0.51 on FPV, versus 0.32 and 0.35 for Gemini-3-Pro. These results show that a specialized, physics-informed, lightweight model can surpass closed-source models on domain-specific tasks.
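The abstract names a hybrid loss that pairs a continuous regression head with a kinematic consistency constraint but gives no formula. The sketch below is an illustrative guess at such a loss: an L2 waypoint regression term plus hinge penalties on acceleration and curvature limits. All function names, limits (`a_max`, `kappa_max`), and the weighting `lam` are assumptions, not the paper's definitions.

```python
import numpy as np

def kinematic_penalty(traj, dt=0.1, a_max=3.0, kappa_max=0.2):
    """Penalize waypoints that violate simple kinematic limits.

    traj: (T, 2) array of (x, y) waypoints sampled every dt seconds.
    The limits are illustrative placeholders, not values from the paper.
    """
    v = np.diff(traj, axis=0) / dt                  # per-step velocity vectors
    speed = np.linalg.norm(v, axis=1)
    accel = np.abs(np.diff(speed)) / dt             # longitudinal accel magnitude
    heading = np.arctan2(v[:, 1], v[:, 0])
    yaw_rate = np.abs(np.diff(heading)) / dt
    curvature = yaw_rate / np.maximum(speed[1:], 1e-6)
    # Hinge penalties: zero whenever the trajectory stays within limits.
    p_a = np.maximum(accel - a_max, 0.0).sum()
    p_k = np.maximum(curvature - kappa_max, 0.0).sum()
    return p_a + p_k

def hybrid_loss(pred, gt, lam=0.5):
    """L2 waypoint regression plus a weighted kinematic-consistency term."""
    reg = np.mean(np.sum((pred - gt) ** 2, axis=1))
    return reg + lam * kinematic_penalty(pred)
```

A feasible straight-line trajectory incurs zero kinematic penalty, so the loss reduces to pure regression; a trajectory with abrupt heading flips is penalized even when it matches the ground truth closely.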
Problem

Research questions and friction points this paper is trying to address.

Vision Language Action
Driving Style
Trajectory Generation
Autonomous Driving
Kinematic Feasibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Style-aware driving
Physics-informed VLA
Kinematic consistency
Multimodal instruction tuning
Trajectory feasibility
Yuan Gao
Research Associate, Technical University of Munich
Autonomous Driving, Foundation Models, Scenario Generation, Motion Planning, Control
Dengyuan Hua
Professorship of Autonomous Vehicle Systems, TUM School of Engineering and Design, Technical University of Munich, 85748 Garching, Germany; Munich Institute of Robotics and Machine Intelligence (MIRMI)
Mattia Piccinini
TUM Global Post-doc Researcher, Technical University of Munich
Autonomous Vehicles, Artificial Intelligence, Robotics, Trajectory Planning, Motion Control
Finn Rasmus Schäfer
Professorship of Autonomous Vehicle Systems, TUM School of Engineering and Design, Technical University of Munich, 85748 Garching, Germany; Munich Institute of Robotics and Machine Intelligence (MIRMI)
Korbinian Moller
Research Associate at the Autonomous Vehicle Systems Lab, Technical University of Munich
Autonomous Driving
Lin Li
School of Mechanical and Aerospace Engineering, Nanyang Technological University
Johannes Betz
Professor, Autonomous Vehicle Systems, Technical University of Munich (TUM)
Autonomous Systems, Motion Planning, Control, Robots