AI Summary
This work addresses the vulnerability of existing vision-language-action models to minor linguistic perturbations, which arises from their overreliance on visual priors at the expense of instruction semantics, a failure mode the authors term modality collapse. To mitigate this, they propose the Residual Semantic Steering (RSS) framework, which explicitly models the causal influence of language on action by decoupling physical feasibility from semantic execution. RSS uses Monte Carlo syntactic ensembles, driven by large language model-based distribution expansion, to approximate the true semantic posterior, and pairs them with a dual-stream decoding architecture and probabilistic modeling to disentangle semantic intent from visual signals. Evaluated across multiple manipulation benchmarks, RSS demonstrates significantly enhanced robustness to linguistic perturbations, maintaining stable performance under adversarial conditions and achieving state-of-the-art results.
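The Monte Carlo syntactic ensemble described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `paraphrase_fn` (an LLM-driven instruction rewriter) and `policy_fn` (the VLA model's action distribution for one instruction) are hypothetical stand-ins; the idea is simply to average action probabilities over sampled rephrasings so the estimate depends on semantics rather than one surface form.

```python
import numpy as np

def mc_syntactic_posterior(instruction, paraphrase_fn, policy_fn, n_samples=8):
    """Approximate the semantic action posterior by Monte Carlo:
    sample syntactic variants of the instruction and average the
    policy's action distribution across them."""
    variants = [instruction] + [paraphrase_fn(instruction) for _ in range(n_samples - 1)]
    probs = np.stack([policy_fn(v) for v in variants])  # (n_samples, n_actions)
    return probs.mean(axis=0)

# Toy usage: an identity "paraphraser" and a fixed two-action policy.
posterior = mc_syntactic_posterior(
    "pick up the red block",
    paraphrase_fn=lambda s: s,               # placeholder LLM rewriter
    policy_fn=lambda s: np.array([0.6, 0.4]) # placeholder VLA head
)
```

With a real paraphraser, instructions whose rewordings all map to the same action dominate the average, which is the sense in which the ensemble approximates the true semantic posterior.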
Abstract
Vision-Language-Action (VLA) models have demonstrated impressive capabilities in generalized robotic control; however, they remain notoriously brittle to linguistic perturbations. We identify a critical ``modality collapse'' phenomenon where strong visual priors overwhelm sparse linguistic signals, causing agents to overfit to specific instruction phrasings while ignoring the underlying semantic intent. To address this, we propose \textbf{Residual Semantic Steering (RSS)}, a probabilistic framework that disentangles physical affordance from semantic execution. RSS introduces two theoretical innovations: (1) \textbf{Monte Carlo Syntactic Integration}, which approximates the true semantic posterior via dense, LLM-driven distributional expansion, and (2) \textbf{Residual Affordance Steering}, a dual-stream decoding mechanism that explicitly isolates the causal influence of language by subtracting the visual affordance prior. Theoretical analysis suggests that RSS effectively maximizes the mutual information between action and intent while suppressing visual distractors. Empirical results across diverse manipulation benchmarks demonstrate that RSS achieves state-of-the-art robustness, maintaining performance even under adversarial linguistic perturbations.
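The dual-stream subtraction in Residual Affordance Steering can be illustrated with a small sketch, assuming (as the abstract suggests but does not specify) that both streams produce action logits and the steering happens in logit space, in the style of contrastive or classifier-free guidance; the variable names and the scalar `alpha` are illustrative, not from the paper.

```python
import numpy as np

def softmax(x):
    z = x - x.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def residual_steering(logits_vl, logits_v, alpha=1.0):
    """Dual-stream decoding sketch: the vision-only stream gives the
    affordance prior; subtracting it from the vision+language stream
    leaves a residual attributable to the instruction, which is then
    amplified by alpha before decoding."""
    residual = logits_vl - logits_v
    return softmax(logits_v + alpha * residual)

# Toy usage: the visual prior favors action 0, but language shifts
# intent toward action 1; steering with alpha > 1 amplifies the shift.
logits_v  = np.array([2.0, 1.0, 0.0])  # vision-only stream
logits_vl = np.array([1.0, 2.0, 0.0])  # vision+language stream
probs = residual_steering(logits_vl, logits_v, alpha=2.0)
```

At `alpha = 1` this reduces to ordinary decoding from the vision+language stream; larger `alpha` suppresses actions that are merely visually afforded but not linguistically intended.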