Causal Scene Narration with Runtime Safety Supervision for Vision-Language-Action Driving

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of effectively integrating heterogeneous textual inputs (such as navigation instructions, hazard warnings, and traffic state descriptions) into vision-language-action (VLA) driving models, a limitation that often leads to inaccurate recognition of environmental constraints. To overcome this, the authors propose Causal Scene Narration (CSN), which restructures the textual input at inference time, with zero GPU overhead, to achieve structured separation and alignment between driving intent and environmental constraints. The approach further combines Simplex-based runtime safety supervision with Plackett-Luce Direct Preference Optimization (DPO), embedding intent-aware safety mechanisms deeply into a VLA framework for the first time. Evaluated in closed-loop CARLA simulations across multiple towns, the method improves Driving Score by 31.1% over the LMDrive baseline, with causal structure alone accounting for 39.1% of that gain, while remaining robust under perception noise.
📝 Abstract
Vision-Language-Action (VLA) models for autonomous driving must integrate diverse textual inputs, including navigation commands, hazard warnings, and traffic state descriptions, yet current systems often present these as disconnected fragments, forcing the model to discover on its own which environmental constraints are relevant to the current maneuver. We introduce Causal Scene Narration (CSN), which restructures VLA text inputs through intent-constraint alignment, quantitative grounding, and structured separation, at inference time with zero GPU cost. We complement CSN with Simplex-based runtime safety supervision and training-time alignment via Plackett-Luce DPO with negative log-likelihood (NLL) regularization. A multi-town closed-loop CARLA evaluation shows that CSN improves Driving Score by +31.1% on original LMDrive and +24.5% on the preference-aligned variant. A controlled ablation reveals that causal structure accounts for 39.1% of this gain, with the remainder attributable to information content alone. A perception noise ablation confirms that CSN's benefit is robust to realistic sensing errors. Semantic safety supervision improves Infraction Score, while reactive Time-To-Collision monitoring degrades performance, demonstrating that intent-aware monitoring is needed for VLA systems.
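The abstract's training-time alignment uses Plackett-Luce DPO with NLL regularization. The paper's exact formulation is not reproduced here, but a minimal sketch of a standard Plackett-Luce DPO loss over K ranked responses may clarify the idea; the function name and the `beta`/`nll_coeff` coefficients are illustrative assumptions, not the authors' code:

```python
import math

def pl_dpo_loss(policy_logps, ref_logps, beta=0.1, nll_coeff=0.1):
    """Plackett-Luce DPO loss for K responses ranked best-to-worst.

    policy_logps / ref_logps: per-response sequence log-probabilities
    under the trained policy and a frozen reference model, ordered by
    preference (index 0 = most preferred). Coefficients are illustrative.
    """
    # Implicit reward per response: scaled policy/reference log-ratio.
    rewards = [beta * (p - r) for p, r in zip(policy_logps, ref_logps)]
    # Negative log Plackett-Luce likelihood of the observed ranking:
    # at each stage k, the k-th response must "win" against all
    # remaining responses (softmax over the not-yet-placed items).
    loss = 0.0
    for k in range(len(rewards) - 1):
        denom = sum(math.exp(r) for r in rewards[k:])
        loss += math.log(denom) - rewards[k]
    # NLL regularization: keep the top-ranked response likely in
    # absolute terms, not just relative to the reference model.
    loss += nll_coeff * (-policy_logps[0])
    return loss
```

When the policy matches the reference, the rewards are all zero and the ranking term reduces to log K!, so the loss is driven entirely by the NLL regularizer; policies that score the preferred ordering higher receive a strictly lower loss.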
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action · autonomous driving · text integration · environmental constraints · causal reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal Scene Narration · Vision-Language-Action · Runtime Safety Supervision · Intent-Constraint Alignment · Plackett-Luce DPO