Enhancing Generalization in Vision-Language-Action Models by Preserving Pretrained Representations

📅 2025-09-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degradation of pretrained visual and linguistic representations and limited generalization when directly fine-tuning vision-language-action (VLA) models on robot data, this paper proposes a lightweight adaptation framework that preserves visual and language priors. Methodologically, it introduces: (1) a dual-encoder architecture—freezing the main vision encoder to safeguard generic representations while adding a lightweight, trainable branch for task-specific signals; (2) a string-based action tokenizer, the first to map continuous robot actions into discrete character sequences aligned with the pretraining domain of large language models (LLMs); and (3) a multimodal joint training strategy integrating robotic demonstrations with large-scale vision-language datasets. Evaluated on both simulation and real-world robotic platforms, the approach significantly improves robustness to visual disturbances, novel instructions, and unseen environments, achieving consistently higher task success rates than state-of-the-art baselines.

📝 Abstract
Vision-language-action (VLA) models finetuned from vision-language models (VLMs) hold the promise of leveraging rich pretrained representations to build generalist robots across diverse tasks and environments. However, direct fine-tuning on robot data often disrupts these representations and limits generalization. We present a framework that better preserves pretrained features while adapting them for robot manipulation. Our approach introduces three components: (i) a dual-encoder design with one frozen vision encoder to retain pretrained features and another trainable for task adaptation, (ii) a string-based action tokenizer that casts continuous actions into character sequences aligned with the model's pretraining domain, and (iii) a co-training strategy that combines robot demonstrations with vision-language datasets emphasizing spatial reasoning and affordances. Evaluations in simulation and on real robots show that our method improves robustness to visual perturbations, generalization to novel instructions and environments, and overall task success compared to baselines.
Problem

Research questions and friction points this paper is trying to address.

Preserving pretrained representations in VLMs during robot fine-tuning
Enhancing generalization across diverse tasks and environments
Improving robustness to visual perturbations and novel instructions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-encoder design preserves pretrained features
String-based action tokenizer for continuous actions
Co-training strategy combines robot and vision-language data
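The string-based action tokenizer casts continuous robot actions into character sequences that sit inside the LLM's pretraining domain, so no new action vocabulary is needed. A minimal sketch of the idea follows; the exact number format, precision, and delimiter here are illustrative assumptions, not the paper's specification:

```python
# Hypothetical sketch of a string-based action tokenizer (illustrative,
# not the authors' implementation): continuous action components are
# rounded and rendered as plain decimal strings, which the LLM's
# ordinary text tokenizer can then consume.

def encode_action(action, decimals=3):
    """Render a continuous action vector as a character sequence."""
    return " ".join(f"{x:+.{decimals}f}" for x in action)

def decode_action(text):
    """Recover the continuous action vector from its string form."""
    return [float(tok) for tok in text.split()]

# Example: a 3-DoF action rounded to three decimal places.
action = [0.125, -0.4, 1.0]
encoded = encode_action(action)   # "+0.125 -0.400 +1.000"
decoded = decode_action(encoded)  # [0.125, -0.4, 1.0]
```

Because the encoded actions are ordinary text, the same sequence-modeling objective used in pretraining applies unchanged, which is the alignment the paper highlights.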
Shresth Grover
UC San Diego
Akshay Gopalkrishnan
UC San Diego
Bo Ai
UC San Diego
Henrik I. Christensen
UC San Diego
Hao Su
UC San Diego, Hillbot
Xuanlin Li
Unknown affiliation
Computer Vision · Robotics · Embodied AI · Natural Language Processing · Machine Learning