Mind to Hand: Purposeful Robotic Control via Embodied Reasoning

📅 2025-12-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the decoupling between high-level “reasoning” and low-level “acting” in robotic systems: specifically, how to ground the abstract reasoning capabilities of vision-language models (VLMs), such as task planning, spatial understanding, and trajectory forecasting, in executable physical actions. To bridge this gap, the authors propose a three-stage embodied pretraining framework: (1) continued multimodal pretraining, (2) co-training across robot platforms, and (3) action-prediction training on real-world trajectories from a bimanual robot. Crucially, they introduce a reasoning-action consistency reinforcement learning mechanism that enforces closed-loop alignment between semantic reasoning and motor control. The approach substantially improves generalization on long-horizon tasks and under natural-language instructions, outperforming strong baselines in challenging scenarios involving novel objects, unseen environments, and tasks that require strategic or spatial reasoning.
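To make the staged recipe concrete, the sketch below lays out the three pre-training stages as a sequential schedule. This is a minimal illustration, not the authors' released code: the Stage/run structure, dataset labels, and objective strings are assumptions paraphrased from the summary above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Stage:
    name: str
    datasets: List[str]   # data sources mixed in this stage
    objective: str        # what the stage optimizes

# The three stages described in the summary, trained one after another,
# each initialized from the previous checkpoint.
PIPELINE = [
    Stage(
        name="stage1_embodied_vlm_pretraining",
        datasets=["curated_vision_language_data"],
        objective="embodied reasoning: planning, spatial understanding, trajectory prediction",
    ),
    Stage(
        name="stage2_cross_embodiment_cotraining",
        datasets=["cross_embodiment_robot_data", "curated_vision_language_data"],
        objective="joint action prediction and vision-language modeling",
    ),
    Stage(
        name="stage3_reasoning_action_training",
        datasets=["astribot_s1_bimanual_trajectories"],
        objective="predict a reasoning trace, then the action chunk",
    ),
]

def run(pipeline: List[Stage]) -> None:
    """Train stages in order, each starting from the previous stage's weights."""
    for stage in pipeline:
        print(f"[{stage.name}] data={stage.datasets} -> {stage.objective}")

if __name__ == "__main__":
    run(PIPELINE)
```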

📝 Abstract
Humans act with context and intention, with reasoning playing a central role. While internet-scale data has enabled broad reasoning capabilities in AI systems, grounding these abilities in physical action remains a major challenge. We introduce Lumo-1, a generalist vision-language-action (VLA) model that unifies robot reasoning ("mind") with robot action ("hand"). Our approach builds upon the general multi-modal reasoning capabilities of pre-trained vision-language models (VLMs), progressively extending them to embodied reasoning and action prediction, and ultimately towards structured reasoning and reasoning-action alignment. This results in a three-stage pre-training pipeline: (1) Continued VLM pre-training on curated vision-language data to enhance embodied reasoning skills such as planning, spatial understanding, and trajectory prediction; (2) Co-training on cross-embodiment robot data alongside vision-language data; and (3) Action training with a reasoning process on trajectories collected on Astribot S1, a bimanual mobile manipulator with human-like dexterity and agility. Finally, we integrate reinforcement learning to further refine reasoning-action consistency and close the loop between semantic inference and motor control. Extensive experiments demonstrate that Lumo-1 achieves significant performance improvements in embodied vision-language reasoning, a critical component for generalist robotic control. Real-world evaluations further show that Lumo-1 surpasses strong baselines across a wide range of challenging robotic tasks, with strong generalization to novel objects and environments, excelling particularly in long-horizon tasks and in responding to natural human instructions that require reasoning over strategy, concepts, and space.
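The abstract does not give the exact form of the reinforcement-learning signal, but a minimal reasoning-action consistency reward might combine task success with agreement between the subgoal named in the reasoning trace and the executed motion. Everything in the sketch below, including the waypoint-distance consistency term, the 0.5 weight, and the function name, is an illustrative assumption rather than the paper's formulation.

```python
import numpy as np

def consistency_reward(reasoned_waypoint: np.ndarray,
                       executed_endpoint: np.ndarray,
                       task_success: bool,
                       consistency_weight: float = 0.5) -> float:
    """Combine task success with agreement between the subgoal named in the
    reasoning trace (here, a 3-D end-effector waypoint) and the executed motion."""
    # Consistency term: close to 1 when execution lands near the reasoned
    # waypoint, decaying toward 0 as the discrepancy grows.
    distance = float(np.linalg.norm(reasoned_waypoint - executed_endpoint))
    consistency = np.exp(-distance)
    success = 1.0 if task_success else 0.0
    return float(success + consistency_weight * consistency)

# Example: reasoning predicted a waypoint at (0.40, 0.10, 0.30) m and the
# executed end-effector pose finished nearby, so the reward is close to 1.5.
r = consistency_reward(np.array([0.40, 0.10, 0.30]),
                       np.array([0.42, 0.08, 0.31]),
                       task_success=True)
print(f"reward = {r:.3f}")
```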
Problem

Research questions and friction points this paper is trying to address.

Grounding AI reasoning in physical robotic action
Unifying robot reasoning with robot action execution
Enhancing embodied reasoning for generalist robotic control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-stage pre-training pipeline for robot reasoning and action
Integration of reinforcement learning for reasoning-action consistency
Generalist vision-language-action model for embodied robotic control (see the inference sketch below)
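As a rough picture of how such a reasoning-then-acting model could be used at run time, the sketch below shows a closed loop in which the policy first emits a language reasoning trace and then an action chunk, executes the chunk, and re-observes. The VLAModel and Robot interfaces, the generate() signature, and the 8-step chunks of 7-DoF actions are hypothetical stand-ins, not the published Lumo-1 API.

```python
from typing import List, Tuple

class VLAModel:
    """Stand-in for a reasoning-then-acting VLA policy."""
    def generate(self, image, instruction: str) -> Tuple[str, List[List[float]]]:
        """Return a (reasoning_trace, action_chunk) pair; stubbed for illustration."""
        reasoning = "plan: grasp the mug handle, then place the mug on the left shelf"
        action_chunk = [[0.0] * 7 for _ in range(8)]   # 8 steps of 7-DoF action deltas
        return reasoning, action_chunk

class Robot:
    """Minimal stand-in for a manipulator interface."""
    def __init__(self) -> None:
        self._steps = 0
    def observe(self):
        return None                      # placeholder for a camera frame
    def apply(self, action: List[float]) -> None:
        self._steps += 1                 # pretend to execute one action step
    def task_done(self) -> bool:
        return self._steps >= 16         # stop after two chunks in this demo

def control_loop(model: VLAModel, robot: Robot, instruction: str) -> None:
    """Reason in language, execute the predicted chunk, re-observe, and repeat."""
    step = 0
    while not robot.task_done():
        reasoning, actions = model.generate(robot.observe(), instruction)
        print(f"[chunk starting at step {step}] {reasoning}")
        for action in actions:           # execute the chunk, then re-plan
            robot.apply(action)
            step += 1

control_loop(VLAModel(), Robot(), "put the mug on the left shelf")
```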