LACY: A Vision-Language Model-based Language-Action Cycle for Self-Improving Robotic Manipulation

๐Ÿ“… 2025-11-04
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing language-to-action (L2A) policy learning approaches suffer from insufficient contextual understanding, resulting in poor generalization and uninterpretable behaviors. This paper proposes the first vision-language-model-based bidirectional languageโ€“action mapping framework, unifying action generation, action verbalization, and semantic consistency verification into a self-improving closed loop. We innovatively introduce a low-confidence-triggered active data augmentation mechanism that enables autonomous improvement without human annotations. Through multi-task joint training, the framework significantly enhances semantic alignment between language instructions and robotic actions. Evaluated on grasp-and-place tasks in both simulation and real-world settings, our method achieves an average success rate improvement of 56.46% over prior approaches. Results demonstrate substantial gains in generalization capability, behavioral interpretability, and practical deployability.

Technology Category

Application Category

๐Ÿ“ Abstract
Learning generalizable policies for robotic manipulation increasingly relies on large-scale models that map language instructions to actions (L2A). However, this one-way paradigm often produces policies that execute tasks without deeper contextual understanding, limiting their ability to generalize or explain their behavior. We argue that the complementary skill of mapping actions back to language (A2L) is essential for developing more holistic grounding. An agent capable of both acting and explaining its actions can form richer internal representations and unlock new paradigms for self-supervised learning. We introduce LACY (Language-Action Cycle), a unified framework that learns such bidirectional mappings within a single vision-language model. LACY is jointly trained on three synergistic tasks: generating parameterized actions from language (L2A), explaining observed actions in language (A2L), and verifying semantic consistency between two language descriptions (L2C). This enables a self-improving cycle that autonomously generates and filters new training data through an active augmentation strategy targeting low-confidence cases, thereby improving the model without additional human labels. Experiments on pick-and-place tasks in both simulation and the real world show that LACY improves task success rates by 56.46% on average and yields more robust language-action grounding for robotic manipulation. Project page: https://vla2026.github.io/LACY/
Problem

Research questions and friction points this paper is trying to address.

Robotic manipulation policies lack contextual understanding and generalization capabilities
One-way language-to-action mapping limits explanation of behavior and self-improvement
Current models cannot autonomously generate training data without human supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional language-action mapping in single model
Self-improving cycle with autonomous data generation
Active augmentation targeting low-confidence cases
๐Ÿ”Ž Similar Papers
No similar papers found.
Y
Youngjin Hong
Department of Electrical and Computer Engineering, Univ. of Minnesota, Minneapolis, USA
Houjian Yu
Houjian Yu
Amazon, University of Minnesota
RoboticsComputer Vision
M
Mingen Li
Department of Electrical and Computer Engineering, Univ. of Minnesota, Minneapolis, USA
C
Changhyun Choi
Department of Electrical and Computer Engineering, Univ. of Minnesota, Minneapolis, USA