$\Delta$VLA: Prior-Guided Vision-Language-Action Models via World Knowledge Variation

📅 2026-03-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses a limitation of existing vision-language-action (VLA) models, which typically predict future states directly without explicitly reasoning about dynamic changes in world knowledge, thereby constraining action-generation efficiency and generalization. To overcome this, the authors propose the ΔVLA framework, which guides action decisions by modeling deviations from prior world knowledge rather than predicting absolute future states. The approach introduces a Prior-Guided World Knowledge Extractor (PWKE) to capture explicit world priors, employs Latent World Variation Quantization (LWVQ) to discretize representations of state changes, and incorporates a Conditional Variation Attention (CV-Atten) mechanism for context-aware reasoning. Experiments demonstrate that ΔVLA achieves state-of-the-art performance on both simulated and real-world robotic tasks while significantly improving computational efficiency.
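The core idea in the summary (acting on a modeled deviation from an explicit world-knowledge prior, rather than on a regressed absolute future state) can be made concrete with a small sketch. Everything below, including the class name `DeltaConditionedActionHead`, the layer choices, and the dimensions, is an illustrative assumption based on this description, not the authors' implementation (see the linked repository for that).

```python
# Minimal PyTorch sketch of the delta-modeling idea described above:
# the policy predicts a *variation* over the current world-knowledge
# prior and conditions actions on (prior, variation), instead of
# regressing the absolute future state. Names and sizes are hypothetical.
import torch
import torch.nn as nn

class DeltaConditionedActionHead(nn.Module):
    def __init__(self, feat_dim: int = 256, action_dim: int = 7):
        super().__init__()
        # Predicts the change in world knowledge, not the next state itself.
        self.variation_predictor = nn.Linear(feat_dim, feat_dim)
        # Actions are conditioned on the prior plus its predicted variation.
        self.action_head = nn.Linear(2 * feat_dim, action_dim)

    def forward(self, prior_feat: torch.Tensor) -> torch.Tensor:
        delta = self.variation_predictor(prior_feat)  # (B, feat_dim)
        return self.action_head(torch.cat([prior_feat, delta], dim=-1))

# Usage: actions = DeltaConditionedActionHead()(torch.randn(4, 256))
```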

πŸ“ Abstract
Recent vision-language-action (VLA) models have significantly advanced robotic manipulation by unifying perception, reasoning, and control. To achieve such integration, recent studies adopt a predictive paradigm that models future visual states or world knowledge to guide action generation. However, these models emphasize forecasting outcomes rather than reasoning about the underlying process of change, which is essential for determining how to act. To address this, we propose $\Delta$VLA, a prior-guided framework that models world-knowledge variations relative to an explicit current-world-knowledge prior for action generation, rather than regressing absolute future world states. Specifically, 1) to construct the current world knowledge prior, we propose the Prior-Guided World Knowledge Extractor (PWKE). It extracts manipulable regions, spatial relations, and semantic cues from the visual input, guided by auxiliary heads and prior pseudo-labels, thus reducing redundancy. 2) Building upon this, to represent how world knowledge evolves under actions, we introduce the Latent World Variation Quantization (LWVQ). It learns a discrete latent space via a VQ-VAE objective to encode world knowledge variations, shifting prediction from full modalities to a compact latent space. 3) Moreover, to mitigate interference during variation modeling, we design the Conditional Variation Attention (CV-Atten), which promotes disentangled learning and preserves the independence of knowledge representations. Extensive experiments on both simulated benchmarks and real-world robotic tasks demonstrate that $\Delta$VLA achieves state-of-the-art performance while improving efficiency. Code and real-world execution videos are available at https://github.com/JiuTian-VL/DeltaVLA.
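Of the three components, LWVQ is the most concretely specified: a discrete latent space trained with a VQ-VAE objective to encode world-knowledge variations. Below is a minimal sketch, assuming the standard VQ-VAE codebook and commitment losses with a straight-through estimator, applied to the delta between prior and next knowledge features; `LatentVariationQuantizer` and all hyperparameters are hypothetical names and values, not the paper's actual module.

```python
# Hedged sketch of a variation quantizer in the spirit of LWVQ:
# quantize the *change* between two world-knowledge features with a
# VQ-VAE-style codebook. Details are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentVariationQuantizer(nn.Module):
    def __init__(self, dim: int = 256, codebook_size: int = 512, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)
        self.codebook.weight.data.uniform_(-1.0 / codebook_size, 1.0 / codebook_size)
        self.beta = beta  # commitment-loss weight, as in the VQ-VAE paper

    def forward(self, prior_feat: torch.Tensor, next_feat: torch.Tensor):
        # Model the variation relative to the prior, not the absolute next state.
        delta = next_feat - prior_feat                    # (B, dim)
        # Nearest codebook entry by Euclidean distance.
        dists = torch.cdist(delta, self.codebook.weight)  # (B, K)
        idx = dists.argmin(dim=-1)                        # (B,)
        z_q = self.codebook(idx)                          # (B, dim)
        # Standard VQ-VAE losses: codebook term + commitment term.
        vq_loss = (F.mse_loss(z_q, delta.detach())
                   + self.beta * F.mse_loss(delta, z_q.detach()))
        # Straight-through estimator so gradients flow to the encoder.
        z_q = delta + (z_q - delta).detach()
        return z_q, idx, vq_loss
```

In this sketch, the quantized code z_q (together with the PWKE prior features) would condition the action head, so prediction shifts from full future modalities to a compact discrete token, consistent with the efficiency claim in the abstract.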
Problem

Research questions and friction points this paper is trying to address.

vision-language-action
world knowledge variation
action generation
robotic manipulation
predictive modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action
World Knowledge Variation
Discrete Latent Space
Prior-Guided Learning
Robotic Manipulation
Yijie Zhu
Harbin Institute of Technology, Shenzhen, Shenzhen 518055, China, and Great Bay University, Dongguan 523000, China
Jie He
Georgia Institute of Technology
Climate Science
Rui Shao
Professor, Harbin Institute of Technology (Shenzhen)
Computer Vision · Multimodal LLM · Embodied AI
Kaishen Yuan
Information Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511400, China
Tao Tan
FCA MPU
Medical Imaging AI
Xiaochen Yuan
Macao Polytechnic University, Macao 999078, China
Zitong Yu
U.S. Food and Drug Administration
Medical imaging · Deep learning · Machine learning · Image reconstruction