PVI: Plug-in Visual Injection for Vision-Language-Action Models

📅 2026-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language-action models struggle with multi-stage manipulation tasks because they rely on semantically abstracted pretrained vision models, which often neglect geometric details and lack explicit temporal modeling. To address this, the authors propose a lightweight, encoder-agnostic, plug-and-play module that injects video-level visual representations—such as those from V-JEPA2 or DINOv2—into flow-matching action experts via a zero-initialized residual path, enabling single-stage fine-tuning without modifying the backbone architecture. The work provides the first empirical validation that video-level features significantly outperform static image features on long-horizon manipulation tasks. Consistent gains are demonstrated on both simulated and real-world dual-arm cloth-folding benchmarks, with particularly pronounced improvements in multi-stage scenarios requiring state tracking.
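As a concrete illustration of the zero-initialized residual path described above, here is a minimal PyTorch sketch. This is not the authors' code: the module name, dimensions, and the ReLU projection are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ZeroInitInjector(nn.Module):
    """Adds auxiliary visual features to the action expert's hidden states
    through a residual path whose final layer is zero-initialized, so the
    pretrained policy's behavior is unchanged at the start of fine-tuning."""

    def __init__(self, aux_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(aux_dim, hidden_dim)    # maps encoder features
        self.out = nn.Linear(hidden_dim, hidden_dim)  # zero-initialized below
        nn.init.zeros_(self.out.weight)
        nn.init.zeros_(self.out.bias)

    def forward(self, hidden: torch.Tensor, aux: torch.Tensor) -> torch.Tensor:
        # hidden: (B, T, hidden_dim) action-expert states
        # aux:    (B, T, aux_dim)    features from, e.g., V-JEPA2 or DINOv2
        return hidden + self.out(torch.relu(self.proj(aux)))
```

Because the last layer starts at zero, the module is a no-op at initialization, which is what allows single-stage fine-tuning without disturbing the pretrained backbone.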

📝 Abstract
VLA architectures that pair a pretrained VLM with a flow-matching action expert have emerged as a strong paradigm for language-conditioned manipulation. Yet the VLM, optimized for semantic abstraction and typically conditioned on static visual observations, tends to attenuate fine-grained geometric cues and often lacks explicit temporal evidence for the action expert. Prior work mitigates this by injecting auxiliary visual features, but existing approaches either focus on static spatial representations or require substantial architectural modifications to accommodate temporal inputs, leaving temporal information underexplored. We propose Plug-in Visual Injection (PVI), a lightweight, encoder-agnostic module that attaches to a pretrained action expert and injects auxiliary visual representations via zero-initialized residual pathways, preserving pretrained behavior with only single-stage fine-tuning. Using PVI, we obtain consistent gains over the base policy and a range of competitive alternative injection strategies, and our controlled study shows that temporal video features (V-JEPA2) outperform strong static image features (DINOv2), with the largest gains on multi-phase tasks requiring state tracking and coordination. Real-robot experiments on long-horizon bimanual cloth folding further demonstrate the practicality of PVI beyond simulation.
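For readers unfamiliar with the flow-matching side, the sketch below shows one plausible way the injected features could condition a flow-matching action expert during fine-tuning. The `encode`/`decode` split, the `video_encoder` call, and the linear-interpolant formulation are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def flow_matching_step(action_expert, injector, video_encoder, frames, actions):
    """One training-step sketch. actions: (B, T, A) ground-truth action chunk;
    frames: (B, F, C, H, W) recent observation frames."""
    noise = torch.randn_like(actions)         # x_0 ~ N(0, I)
    t = torch.rand(actions.size(0), 1, 1)     # per-sample flow time in [0, 1)
    x_t = (1 - t) * noise + t * actions       # linear interpolant
    target_v = actions - noise                # its constant velocity

    with torch.no_grad():
        aux = video_encoder(frames)           # frozen temporal features

    hidden = action_expert.encode(x_t, t)     # pretrained trunk (hypothetical API)
    hidden = injector(hidden, aux)            # zero-init residual injection
    pred_v = action_expert.decode(hidden)
    return F.mse_loss(pred_v, target_v)       # flow-matching regression loss
```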
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action Models
Temporal Information
Visual Injection
Geometric Cues
Static Visual Observations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-in Visual Injection
Vision-Language-Action Models
Temporal Video Features
Zero-initialized Residual Pathways
Flow-matching Action Expert
Zezhou Zhang
Lionrock AI Lab, China Merchants Group, Hong Kong, China
Songxin Zhang
Lionrock AI Lab, China Merchants Group, Hong Kong, China
Xiao Xiong
Nankai University
Junjie Zhang
Renmin University of China
Zejian Xie
Lionrock AI Lab, China Merchants Group, Hong Kong, China
Jingyi Xi
Lionrock AI Lab, China Merchants Group, Hong Kong, China
Zunyao Mao
Lionrock AI Lab, China Merchants Group, Hong Kong, China
Zan Mao
Lionrock AI Lab, China Merchants Group, Hong Kong, China
Zhixin Mai
Lionrock AI Lab, China Merchants Group, Hong Kong, China
Zhuoyang Song
Lionrock AI Lab, China Merchants Group, Hong Kong, China
Jiaxing Zhang
Lionrock AI Lab, China Merchants Group, Hong Kong, China