Stabilizing Humanoid Robot Trajectory Generation via Physics-Informed Learning and Control-Informed Steering

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address trajectory divergence, sliding contacts, and insufficient stability in humanoid robot trajectory generation, which stem from data scarcity and a lack of physical priors, this paper proposes a learning-control co-optimization framework. Methodologically: (1) physics prior losses, such as a zero foot contact velocity constraint, are incorporated into supervised imitation learning to improve the physical consistency of generated trajectories; (2) at inference time, a proportional-integral output feedback controller is applied to the generated state to suppress motion drift in real time. The key innovation is embedding physics priors and closed-loop feedback control directly into the end-to-end trajectory learning pipeline, jointly promoting dynamic feasibility and contact stability. Experiments on the ergoCub humanoid platform demonstrate significant improvements in trajectory accuracy and physical-constraint conformity under real-world conditions, while maintaining compatibility with diverse low-level controllers.
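The physics prior can be sketched as an extra loss term added to the imitation objective. The function below is an illustrative reconstruction, not the paper's exact formulation: the function name, weighting, and the assumption that contact is given as a binary per-timestep mask are all hypothetical.

```python
import numpy as np

def physics_informed_loss(pred_traj, target_traj, foot_vel, in_contact, w_contact=1.0):
    """Imitation loss plus a physics prior penalising nonzero foot velocity
    during contact phases (illustrative form; the paper's loss may differ).

    pred_traj, target_traj: (T, D) predicted and demonstrated states
    foot_vel:               (T, K) foot-end velocity components
    in_contact:             (T,)   binary contact mask per timestep
    """
    # Standard supervised imitation term: mean squared trajectory error
    imitation = np.mean((pred_traj - target_traj) ** 2)
    # Physics prior: squared foot velocity counted only while in contact,
    # encouraging zero contact foot velocity (no sliding)
    contact_penalty = np.mean(in_contact[:, None] * foot_vel ** 2)
    return imitation + w_contact * contact_penalty
```

In a deep-learning pipeline this term would be computed on differentiable tensors so its gradient shapes the learned trajectory generator; the NumPy version above only illustrates the structure of the objective.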

📝 Abstract
Recent trends in humanoid robot control have successfully employed imitation learning to enable the learned generation of smooth, human-like trajectories from human data. While these approaches make more realistic motions possible, they are limited by the amount of available motion data, and do not incorporate prior knowledge about the physical laws governing the system and its interactions with the environment. Thus they may violate such laws, leading to divergent trajectories and sliding contacts which limit real-world stability. We address such limitations via a two-pronged learning strategy which leverages the known physics of the system and fundamental control principles. First, we encode physics priors during supervised imitation learning to promote trajectory feasibility. Second, we minimize drift at inference time by applying a proportional-integral controller directly to the generated output state. We validate our method on various locomotion behaviors for the ergoCub humanoid robot, where a physics-informed loss encourages zero contact foot velocity. Our experiments demonstrate that the proposed approach is compatible with multiple controllers on a real robot and significantly improves the accuracy and physical constraint conformity of generated trajectories.
Problem

Research questions and friction points this paper is trying to address.

Generating stable humanoid robot trajectories from limited motion data
Preventing physical-law violations that cause divergent trajectories and sliding contacts
Improving trajectory feasibility through physics-informed learning and control principles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-informed learning for trajectory feasibility
Control-informed steering to minimize drift
Supervised imitation learning with physics priors