LoHoVLA: A Unified Vision-Language-Action Model for Long-Horizon Embodied Tasks

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world long-horizon embodied tasks require tight coordination between high-level planning (e.g., goal decomposition) and low-level control (e.g., precise actuation), yet existing Vision-Language-Action (VLA) models exhibit weak planning capabilities, and hierarchical architectures suffer from coordination errors that propagate and cause task failure. To address this, we propose LoHoVLA, a unified VLA framework built on a large pretrained vision-language model that jointly decodes sub-task language descriptions and robot action tokens in a single sequence. We further introduce a hierarchical closed-loop control mechanism to dynamically correct errors arising from both high-level planning and low-level control. Additionally, we construct LoHoSet, a dataset of long-horizon embodied tasks built on the Ravens simulator. Trained and evaluated on 20 long-horizon tasks in Ravens, LoHoVLA achieves significant improvements over state-of-the-art hierarchical methods and standard VLA baselines, demonstrating superior sub-task completion rates and final success rates.
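As a rough illustration of what "jointly decodes sub-task language descriptions and robot action tokens in a single sequence" could look like, here is a minimal Python sketch. The `model` interface (`encode`, `generate_token`, `detokenize`, and the `ACT_SEP`/`EOS` special tokens) is a hypothetical stand-in invented for this sketch, not LoHoVLA's actual API.

```python
# Illustrative sketch of unified language + action decoding in one token
# stream. All class and method names here are hypothetical stand-ins,
# not LoHoVLA's actual interface.
from dataclasses import dataclass


@dataclass
class Step:
    subtask: str              # decoded sub-task text, e.g. "pick up the red block"
    action_tokens: list[int]  # discretized robot action (e.g. pose bins)


def decode_step(model, image, goal, history):
    """Jointly decode one sub-task description and its action tokens.

    The backbone autoregressively emits language tokens until a
    (hypothetical) ACT_SEP separator token, then emits action tokens
    until EOS. Both halves share one sequence and one set of weights.
    """
    tokens = model.encode(image=image, goal=goal, history=history)
    lang_tokens, act_tokens = [], []
    emitting_actions = False
    while True:
        tok = model.generate_token(tokens)
        tokens.append(tok)
        if tok == model.EOS:
            break
        if tok == model.ACT_SEP:   # switch from language to action decoding
            emitting_actions = True
        elif emitting_actions:
            act_tokens.append(tok)
        else:
            lang_tokens.append(tok)
    return Step(subtask=model.detokenize(lang_tokens), action_tokens=act_tokens)
```

The point of the shared stream is that planning (the sub-task text) and control (the action tokens) are produced by the same representation, which is what the paper credits for better generalization across tasks.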

📝 Abstract
Real-world embodied agents face long-horizon tasks, characterized by high-level goals demanding multi-step solutions beyond single actions. Successfully navigating these requires both high-level task planning (i.e., decomposing goals into sub-tasks) and low-level motion control (i.e., generating precise robot actions). While existing vision language action (VLA) models and hierarchical architectures offer potential in embodied tasks, the former often falter in planning, and the latter can suffer from coordination issues, both hampering performance. We introduce a new unified VLA framework for long-horizon tasks, dubbed LoHoVLA, to overcome these limitations. LoHoVLA leverages a large pretrained vision language model (VLM) as the backbone to jointly generate language and action tokens for sub-task generation and robot action prediction, respectively. This shared representation promotes better generalization across tasks. Additionally, LoHoVLA embraces a hierarchical closed-loop control mechanism to mitigate errors originating from both high-level planning and low-level control. To train LoHoVLA, we introduce LoHoSet, a dataset built on the Ravens simulator, containing 20 long-horizon tasks, each with 1,000 expert demonstrations composed of visual observations, linguistic goals, sub-tasks, and robot actions. Experimental results show that LoHoVLA significantly surpasses both hierarchical and standard VLA approaches on long-horizon embodied tasks in the Ravens simulator. These findings underscore the promise of unified architectures for advancing generalizable embodied intelligence.
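For concreteness, here is a hedged sketch of what one LoHoSet demonstration record might look like. The field names and types are assumptions inferred from the abstract's description (visual observations, linguistic goals, sub-tasks, and robot actions), not the released schema.

```python
# Hypothetical schema for a single LoHoSet demonstration, inferred from
# the abstract; field names and types are assumptions, not the released format.
from dataclasses import dataclass, field
import numpy as np


@dataclass
class LoHoSetDemo:
    goal: str                                                     # linguistic goal, e.g. "stack the blocks by color"
    observations: list[np.ndarray] = field(default_factory=list)  # per-step visual observations (RGB frames)
    subtasks: list[str] = field(default_factory=list)             # expert sub-task annotations, one per step
    actions: list[np.ndarray] = field(default_factory=list)       # expert robot actions, one per step


# The full dataset would then be roughly 20 tasks x 1,000 demos each:
# dataset: dict[str, list[LoHoSetDemo]]   # task name -> expert demonstrations
```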
Problem

Research questions and friction points this paper is trying to address.

Addresses long-horizon embodied tasks that require multi-step solutions beyond single actions
Overcomes weak planning in standard VLA models and coordination issues in hierarchical architectures
Integrates high-level task decomposition with low-level motion control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified VLA framework for long-horizon tasks
Shared VLM backbone for language and action
Hierarchical closed-loop control mechanism (see the sketch after this list)
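The closed-loop idea can be sketched as a simple re-plan-after-every-step cycle, reusing `decode_step` from the earlier sketch. Here `env` and its `reset`/`execute` methods are hypothetical stand-ins; this illustrates the general mechanism, not the paper's exact control logic.

```python
# Hedged sketch of a hierarchical closed-loop control cycle: after each
# executed action, the agent re-observes the scene and re-plans, so
# errors at either level can be corrected instead of propagating.
# `env` and `decode_step` (from the earlier sketch) are hypothetical.

def run_episode(model, env, goal, max_steps=50):
    history = []
    obs = env.reset()
    for _ in range(max_steps):
        # High level: derive the next sub-task from the *current* observation,
        # so a failed or drifted sub-task is simply re-planned next cycle.
        step = decode_step(model, image=obs, goal=goal, history=history)
        # Low level: execute the decoded action and observe the outcome.
        obs, done = env.execute(step.action_tokens)
        history.append(step)
        if done:  # environment signals the linguistic goal is satisfied
            return True
    return False
```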
👥 Authors
Yi Yang
Fudan University
Jiaxuan Sun
ShanghaiTech University
Siqi Kou
Shanghai Jiao Tong University
Yihan Wang
Fudan University
Zhijie Deng
Shanghai Jiao Tong University