GROVE: A Generalized Reward for Learning Open-Vocabulary Physical Skill

📅 2025-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses scalability and out-of-distribution generalization bottlenecks in open-vocabulary physical skill learning for simulated agents, without relying on hand-crafted rewards or task-specific demonstrations. The proposed self-optimizing framework features: (1) an LLM–VLM closed loop, in which the LLM generates physically grounded constraints and the VLM evaluates motion semantics; and (2) a lightweight Pose2CLIP mapper that bridges the domain gap between simulated pose representations and visual-semantic embeddings. The method integrates large language models, vision-language models, reinforcement learning, and cross-modal feature alignment. Across diverse morphologies and learning paradigms, it achieves a 22.2% improvement in motion naturalness, a 25.7% increase in task success rate, and an 8.4× speedup in training, while significantly enhancing zero-shot task transfer and out-of-distribution generalization.
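A minimal Python sketch of that closed loop may help make the refinement pattern concrete. All names here (query_llm, query_vlm, train_policy, refine_reward) are hypothetical stubs standing in for real model calls; this illustrates the iteration structure, not GROVE's actual interfaces.

```python
def query_llm(task: str, feedback: str = "") -> str:
    """Hypothetical stub: ask an LLM for physically grounded constraints."""
    suffix = f", revised per feedback: {feedback}" if feedback else ""
    return f"constraints for '{task}'{suffix}"

def query_vlm(motion: str, task: str) -> tuple[float, str]:
    """Hypothetical stub: a VLM scores motion semantics and critiques it."""
    return 0.5, f"motion for '{task}' looks unnatural"

def train_policy(constraints: str) -> str:
    """Hypothetical stub: RL training against the constraint-based reward."""
    return f"motion trained under [{constraints}]"

def refine_reward(task: str, rounds: int = 3, target: float = 0.9) -> str:
    constraints = query_llm(task)
    for _ in range(rounds):
        motion = train_policy(constraints)
        score, critique = query_vlm(motion, task)
        if score >= target:  # VLM judges the motion acceptable
            break
        # Close the loop: the VLM's critique steers the next LLM query.
        constraints = query_llm(task, feedback=critique)
    return constraints

print(refine_reward("perform a cartwheel"))
```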

📝 Abstract
Learning open-vocabulary physical skills for simulated agents presents a significant challenge in artificial intelligence. Current reinforcement learning approaches face critical limitations: manually designed rewards lack scalability across diverse tasks, while demonstration-based methods struggle to generalize beyond their training distribution. We introduce GROVE, a generalized reward framework that enables open-vocabulary physical skill learning without manual engineering or task-specific demonstrations. Our key insight is that Large Language Models (LLMs) and Vision-Language Models (VLMs) provide complementary guidance: LLMs generate precise physical constraints capturing task requirements, while VLMs evaluate motion semantics and naturalness. Through an iterative design process, VLM-based feedback continuously refines LLM-generated constraints, creating a self-improving reward system. To bridge the domain gap between simulation and natural images, we develop Pose2CLIP, a lightweight mapper that efficiently projects agent poses directly into semantic feature space without computationally expensive rendering. Extensive experiments across diverse embodiments and learning paradigms demonstrate GROVE's effectiveness, achieving 22.2% higher motion naturalness and 25.7% better task completion scores while training 8.4× faster than previous methods. These results establish a new foundation for scalable physical skill acquisition in simulated environments.
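The abstract describes Pose2CLIP only as a lightweight mapper from agent poses to semantic feature space, so the PyTorch sketch below shows one plausible realization: a small MLP projecting a flattened pose vector into a unit-normalized CLIP-sized embedding, with cosine similarity to the task's text embedding serving as a semantic reward. The MLP depth, hidden width, and all dimensions (pose_dim=69, clip_dim=512) are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Pose2CLIP(nn.Module):
    """Toy pose-to-CLIP-space mapper; every dimension is an assumption."""
    def __init__(self, pose_dim: int = 69, clip_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, clip_dim),
        )

    def forward(self, pose: torch.Tensor) -> torch.Tensor:
        # Unit-normalize so cosine similarity reduces to a dot product,
        # matching how CLIP embeddings are typically compared.
        return F.normalize(self.net(pose), dim=-1)

mapper = Pose2CLIP()
pose = torch.randn(1, 69)                             # one simulated agent pose
text_emb = F.normalize(torch.randn(1, 512), dim=-1)   # stand-in CLIP text embedding
semantic_reward = (mapper(pose) * text_emb).sum(dim=-1)  # cosine similarity in [-1, 1]
print(semantic_reward.item())
```

Because such a mapper consumes simulator state directly, the semantic reward can be computed without rendering frames at every step, which is the efficiency gain the abstract attributes to Pose2CLIP.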
Problem

Research questions and friction points this paper is trying to address.

Learning open-vocabulary physical skills for simulated agents
Overcoming limitations of manual rewards and demonstration-based methods
Bridging domain gap between simulation and natural images
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate precise physical constraints
VLMs evaluate motion semantics and naturalness
Pose2CLIP projects poses into semantic space (a sketch of how these signals can combine follows below)
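How the constraint and semantic signals are ultimately mixed is not specified in this summary, so the sketch below uses a simple equal-weight linear blend purely for illustration; blended_reward and its weights are hypothetical.

```python
def blended_reward(constraint_r: float, semantic_r: float,
                   w_c: float = 0.5, w_s: float = 0.5) -> float:
    """Hypothetical linear blend of the LLM-constraint reward and the
    Pose2CLIP semantic reward; the paper's exact rule may differ."""
    return w_c * constraint_r + w_s * semantic_r

# e.g. a constraint score of 0.8 and a semantic score of 0.6
print(blended_reward(0.8, 0.6))  # -> 0.7 with equal weights
```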
👥 Authors
Jieming Cui
Peking University
Tengyu Liu
Beijing Institute for General Artificial Intelligence
computer vision · human-object interaction · human motion generation · grasping
Ziyu Meng
State Key Laboratory of General Artificial Intelligence, BIGAI
Jiale Yu
University of Science and Technology of China
Ran Song
School of Control Science and Engineering, Shandong University
Wei Zhang
School of Control Science and Engineering, Shandong University
Yixin Zhu
Assistant Professor, Peking University
Computer Vision · Visual Reasoning · Human-Robot Teaming
Siyuan Huang
State Key Laboratory of General Artificial Intelligence, BIGAI