VL-SAFE: Vision-Language Guided Safety-Aware Reinforcement Learning with World Models for Autonomous Driving

📅 2025-05-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address critical challenges in autonomous driving reinforcement learning—including low sample efficiency, poor generalization, and the absence of safety-aware semantic modeling—this paper proposes a Vision-Language Model (VLM)-guided implicit world model framework. It is the first to integrate a VLM as an interpretable, context-aware safety semantic prior into both world model construction and policy optimization. The method enables fully offline safe policy learning by leveraging VLM-driven safety scoring for trajectory annotation, imagination-based safe trajectory generation, and Actor-Critic optimization—thereby eliminating online trial-and-error risks. Experimental results demonstrate a 3.2× improvement in sample efficiency, a 67% reduction in collision rate, significantly enhanced cross-scenario generalization, and driving behaviors better aligned with human safety intuition.

Technology Category

Application Category

📝 Abstract
Reinforcement learning (RL)-based autonomous driving policy learning faces critical limitations such as low sample efficiency and poor generalization; its reliance on online interactions and trial-and-error learning is especially unacceptable in safety-critical scenarios. Existing methods including safe RL often fail to capture the true semantic meaning of"safety"in complex driving contexts, leading to either overly conservative driving behavior or constraint violations. To address these challenges, we propose VL-SAFE, a world model-based safe RL framework with Vision-Language model (VLM)-as-safety-guidance paradigm, designed for offline safe policy learning. Specifically, we construct offline datasets containing data collected by expert agents and labeled with safety scores derived from VLMs. A world model is trained to generate imagined rollouts together with safety estimations, allowing the agent to perform safe planning without interacting with the real environment. Based on these imagined trajectories and safety evaluations, actor-critic learning is conducted under VLM-based safety guidance to optimize the driving policy more safely and efficiently. Extensive evaluations demonstrate that VL-SAFE achieves superior sample efficiency, generalization, safety, and overall performance compared to existing baselines. To the best of our knowledge, this is the first work that introduces a VLM-guided world model-based approach for safe autonomous driving. The demo video and code can be accessed at: https://ys-qu.github.io/vlsafe-website/
Problem

Research questions and friction points this paper is trying to address.

Improving sample efficiency and generalization in RL-based autonomous driving
Enhancing semantic safety understanding in complex driving scenarios
Enabling offline safe policy learning without real-world interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

VL-SAFE uses Vision-Language models for safety guidance
World model generates safe imagined rollouts offline
Actor-critic learning optimizes policy under VLM safety
🔎 Similar Papers
No similar papers found.