๐ค AI Summary
Existing language-to-action (L2A) policy learning approaches suffer from insufficient contextual understanding, resulting in poor generalization and uninterpretable behaviors. This paper proposes the first vision-language-model-based bidirectional languageโaction mapping framework, unifying action generation, action verbalization, and semantic consistency verification into a self-improving closed loop. We innovatively introduce a low-confidence-triggered active data augmentation mechanism that enables autonomous improvement without human annotations. Through multi-task joint training, the framework significantly enhances semantic alignment between language instructions and robotic actions. Evaluated on grasp-and-place tasks in both simulation and real-world settings, our method achieves an average success rate improvement of 56.46% over prior approaches. Results demonstrate substantial gains in generalization capability, behavioral interpretability, and practical deployability.
๐ Abstract
Learning generalizable policies for robotic manipulation increasingly relies on large-scale models that map language instructions to actions (L2A). However, this one-way paradigm often produces policies that execute tasks without deeper contextual understanding, limiting their ability to generalize or explain their behavior. We argue that the complementary skill of mapping actions back to language (A2L) is essential for developing more holistic grounding. An agent capable of both acting and explaining its actions can form richer internal representations and unlock new paradigms for self-supervised learning. We introduce LACY (Language-Action Cycle), a unified framework that learns such bidirectional mappings within a single vision-language model. LACY is jointly trained on three synergistic tasks: generating parameterized actions from language (L2A), explaining observed actions in language (A2L), and verifying semantic consistency between two language descriptions (L2C). This enables a self-improving cycle that autonomously generates and filters new training data through an active augmentation strategy targeting low-confidence cases, thereby improving the model without additional human labels. Experiments on pick-and-place tasks in both simulation and the real world show that LACY improves task success rates by 56.46% on average and yields more robust language-action grounding for robotic manipulation. Project page: https://vla2026.github.io/LACY/