CARE: Multi-Task Pretraining for Latent Continuous Action Representation in Robot Control

πŸ“… 2026-01-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work proposes CARE, a novel framework for vision-language-action (VLA) modeling that eliminates the need for explicit action annotations during pretraining. By leveraging only weakly aligned video-text data, CARE performs multi-task pretraining to learn continuous latent action representations that are semantically interpretable and robust to shortcut learning. During fine-tuning, effective control is achieved by training a lightweight action head on only a small amount of labeled data. Experimental results demonstrate that CARE significantly improves task success rates across multiple simulated environments while enhancing the semantic consistency and generalization capability of the learned action representations. These findings validate the framework's scalability and control efficacy under a weakly supervised paradigm, addressing a key limitation of existing VLA models: their reliance on costly action supervision.

πŸ“ Abstract
Recent advances in Vision-Language-Action (VLA) models have shown promise for robot control, but their dependence on action supervision limits scalability and generalization. To address this challenge, we introduce CARE, a novel framework designed to train VLA models for robotic task execution. Unlike existing methods that depend on action annotations during pretraining, CARE eliminates the need for explicit action labels by leveraging only video-text pairs. These weakly aligned data sources enable the model to learn continuous latent action representations through a newly designed multi-task pretraining objective. During fine-tuning, a small set of labeled data is used to train the action head for control. Experimental results across various simulation tasks demonstrate CARE's superior success rate, semantic interpretability, and ability to avoid shortcut learning. These results underscore CARE's scalability, interpretability, and effectiveness in robotic control with weak supervision.
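The two-stage recipe in the abstract — annotation-free pretraining of a continuous latent action space from frame pairs, then fine-tuning a lightweight action head on a small labeled set — can be illustrated with a minimal sketch. Everything below (shapes, the linear/tanh encoder, the least-squares head) is a hypothetical stand-in for exposition; the paper's actual architecture and multi-task objectives are not reproduced here.

```python
# Hypothetical sketch of a CARE-style two-stage pipeline; all names,
# dimensions, and losses are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

class LatentActionEncoder:
    """Stage 1 (pretraining): infer a continuous latent action z from a pair
    of consecutive video frames, with no action labels involved."""
    def __init__(self, frame_dim, latent_dim):
        self.W = rng.normal(scale=0.1, size=(2 * frame_dim, latent_dim))

    def encode(self, frame_t, frame_t1):
        x = np.concatenate([frame_t, frame_t1])
        return np.tanh(x @ self.W)  # continuous latent action

class ActionHead:
    """Stage 2 (fine-tuning): lightweight head mapping the frozen latent
    action to a robot control command, trained on a small labeled set."""
    def __init__(self, latent_dim, action_dim):
        self.W = np.zeros((latent_dim, action_dim))

    def fit(self, Z, A, lr=0.2, steps=2000):
        # Plain least-squares regression via gradient descent.
        for _ in range(steps):
            grad = Z.T @ (Z @ self.W - A) / len(Z)
            self.W -= lr * grad

    def predict(self, z):
        return z @ self.W

# Toy data: consecutive "video" frame pairs, plus synthetic action labels.
frame_dim, latent_dim, action_dim, n = 8, 4, 2, 64
frames_t = rng.normal(size=(n, frame_dim))
frames_t1 = rng.normal(size=(n, frame_dim))

enc = LatentActionEncoder(frame_dim, latent_dim)
Z = np.stack([enc.encode(a, b) for a, b in zip(frames_t, frames_t1)])

# Pretend a small labeled set exists; labels are linear in z by construction.
A = Z @ rng.normal(size=(latent_dim, action_dim))
head = ActionHead(latent_dim, action_dim)
head.fit(Z, A)
err = np.abs(head.predict(Z) - A).mean()
print(f"mean fine-tuning error: {err:.4f}")
```

The point of the sketch is the division of labor: the encoder is trained without action supervision and fixed, while only the small head sees labeled actions during fine-tuning.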
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
action supervision
robot control
scalability
generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

weak supervision
latent action representation
multi-task pretraining
vision-language-action models
robot control
Jiaqi Shi
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China; University of Science and Technology of China, Hefei, China
Xulong Zhang
Ping An Technology (Shenzhen) Co., Ltd.
Federated Large Models · Trusted Computing · Graph Computing
Xiaoyang Qu
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
Jianzong Wang
Postdoctoral Researcher of Department of Electrical and Computer Engineering, University of Florida
Big Data · Storage System · Cloud Computing