RL from Physical Feedback: Aligning Large Motion Models with Humanoid Control

📅 2025-06-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text-to-motion generation for humanoid robots often yields physically infeasible motions that deploy poorly in real-world settings. Method: This paper proposes Reinforcement Learning from Physical Feedback (RLPF), the first framework to jointly model dynamical feasibility assessment and semantic alignment verification. RLPF integrates PyBullet/MuJoCo physics simulation, text-conditioned diffusion or autoregressive motion generation models, and a motion-tracking policy network, optimizing generated motions through closed-loop physical feedback to improve both dynamical plausibility and instruction fidelity. Results: Evaluated on Unitree H1 and Tesla Optimus simulation platforms, RLPF-generated motions achieve high physical feasibility and semantic fidelity, improving task success rate by 62% over baseline methods and substantially narrowing the sim-to-real gap in text-driven robot motion generation.

📝 Abstract
This paper focuses on a critical challenge in robotics: translating text-driven human motions into executable actions for humanoid robots, enabling efficient and cost-effective learning of new behaviors. While existing text-to-motion generation methods achieve semantic alignment between language and motion, they often produce kinematically or physically infeasible motions unsuitable for real-world deployment. To bridge this sim-to-real gap, we propose Reinforcement Learning from Physical Feedback (RLPF), a novel framework that integrates physics-aware motion evaluation with text-conditioned motion generation. RLPF employs a motion tracking policy to assess feasibility in a physics simulator, generating rewards for fine-tuning the motion generator. Furthermore, RLPF introduces an alignment verification module to preserve semantic fidelity to text instructions. This joint optimization ensures both physical plausibility and instruction alignment. Extensive experiments show that RLPF greatly outperforms baseline methods in generating physically feasible motions while maintaining semantic correspondence with text instructions, enabling successful deployment on real humanoid robots.
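The abstract's core loop can be sketched in miniature: a tracking policy attempts the generated motion in a physics simulator, the tracking error becomes a physical-feasibility reward, and that reward is blended with a semantic alignment score before fine-tuning the generator. The sketch below is illustrative only; the function names, the exponential reward shape, and the weighting scheme are assumptions, not the paper's actual formulation.

```python
import numpy as np

def physical_feasibility_reward(reference_motion, simulated_motion):
    """Reward is high when the tracking policy reproduces the generated
    motion closely in the physics simulator (hypothetical reward shape)."""
    # Mean per-frame joint-position error between the generated reference
    # and the motion actually realized under simulated physics.
    tracking_error = np.mean(
        np.linalg.norm(reference_motion - simulated_motion, axis=-1)
    )
    return float(np.exp(-tracking_error))  # maps error to (0, 1]

def rlpf_reward(reference_motion, simulated_motion, align_score, w_align=0.5):
    """Blend physics feedback with a semantic alignment score in [0, 1].

    `align_score` stands in for the alignment verification module's output;
    `w_align` is an assumed mixing weight, not taken from the paper."""
    r_phys = physical_feasibility_reward(reference_motion, simulated_motion)
    return (1.0 - w_align) * r_phys + w_align * align_score
```

A perfectly tracked motion yields a physical reward of 1.0, and any tracking error or alignment deficit pulls the combined reward toward 0, giving the generator a smooth signal to optimize.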
Problem

Research questions and friction points this paper is trying to address.

Translate text-driven human motions into executable robot actions
Ensure generated motions are physically feasible for real-world deployment
Maintain semantic alignment between text instructions and robot motions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning from Physical Feedback (RLPF) framework
Physics-aware motion evaluation with text-conditioned generation
Alignment verification module for semantic fidelity