AI Summary
This work proposes CARE, a novel framework for vision-language-action (VLA) modeling that eliminates the need for explicit action annotations during pretraining. By leveraging only weakly aligned video-text data, CARE performs multi-task pretraining to learn continuous latent action representations that are semantically interpretable and robust to shortcut learning. During fine-tuning, effective control is achieved by training a lightweight action head on only a small amount of labeled data. Experimental results demonstrate that CARE significantly improves task success rates across multiple simulated environments while enhancing the semantic consistency and generalization of the learned action representations. These findings validate the framework's scalability and control efficacy under a weakly supervised paradigm, addressing key limitations of existing VLA models that rely on costly action supervision.
Abstract
Recent advances in Vision-Language-Action (VLA) models have shown promise for robot control, but their dependence on action supervision limits scalability and generalization. To address this challenge, we introduce CARE, a novel framework for training VLA models for robotic task execution. Unlike existing methods that depend on action annotations during pretraining, CARE eliminates the need for explicit action labels by leveraging only video-text pairs. These weakly aligned data sources enable the model to learn continuous latent action representations through a newly designed multi-task pretraining objective. During fine-tuning, a small set of labeled data is used to train the action head for control. Experimental results across various simulation tasks demonstrate CARE's superior success rates, semantic interpretability, and resistance to shortcut learning. These results underscore CARE's scalability, interpretability, and effectiveness for robotic control under weak supervision.
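To make the two-stage recipe concrete, the following is a minimal PyTorch-style sketch of action-free pretraining on video-text pairs followed by fine-tuning of a lightweight action head on a small labeled set. All module names, feature dimensions, the frozen-encoder choice, and the future-frame prediction loss are illustrative assumptions; the abstract does not specify CARE's actual architecture or the components of its multi-task objective.

```python
# Hypothetical sketch of a CARE-style two-stage pipeline (names/shapes are assumptions).
import torch
import torch.nn as nn

FRAME_DIM, TEXT_DIM, LATENT_DIM, ACTION_DIM = 256, 128, 64, 7

class LatentActionEncoder(nn.Module):
    """Maps (frame features, language features) to a continuous latent action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FRAME_DIM + TEXT_DIM, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )
    def forward(self, frame_feat, text_feat):
        return self.net(torch.cat([frame_feat, text_feat], dim=-1))

class FuturePredictionHead(nn.Module):
    """One illustrative pretraining head: predict next-frame features from the
    current frame features and the latent action (stands in for the paper's
    unspecified multi-task objectives)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(FRAME_DIM + LATENT_DIM, FRAME_DIM)
    def forward(self, frame_feat, latent):
        return self.net(torch.cat([frame_feat, latent], dim=-1))

class ActionHead(nn.Module):
    """Lightweight head mapping latent actions to robot actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT_DIM, ACTION_DIM)
    def forward(self, latent):
        return self.net(latent)

# Synthetic stand-in batches so the sketch runs end to end.
video_text_batches = [(torch.randn(8, FRAME_DIM), torch.randn(8, FRAME_DIM),
                       torch.randn(8, TEXT_DIM)) for _ in range(4)]
labeled_batches = [(torch.randn(8, FRAME_DIM), torch.randn(8, TEXT_DIM),
                    torch.randn(8, ACTION_DIM)) for _ in range(2)]

# --- Stage 1: action-free pretraining on weakly aligned video-text pairs ---
encoder, pred_head = LatentActionEncoder(), FuturePredictionHead()
opt = torch.optim.Adam(list(encoder.parameters()) + list(pred_head.parameters()), lr=1e-4)
for frame_t, frame_t1, text in video_text_batches:      # no action labels required
    latent = encoder(frame_t, text)
    loss = nn.functional.mse_loss(pred_head(frame_t, latent), frame_t1)
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: fine-tune only the small action head on limited labeled data ---
action_head = ActionHead()
opt_ft = torch.optim.Adam(action_head.parameters(), lr=1e-4)
for frame_t, text, action in labeled_batches:           # small action-labeled set
    with torch.no_grad():                                # encoder kept frozen (assumption)
        latent = encoder(frame_t, text)
    loss = nn.functional.mse_loss(action_head(latent), action)
    opt_ft.zero_grad(); loss.backward(); opt_ft.step()
```

The point of the sketch is the division of labor: the bulk of the parameters are trained without any action labels, and only the small action head consumes the scarce labeled data during fine-tuning.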