Learning Unified Force and Position Control for Legged Loco-Manipulation

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Legged robots struggle to jointly model contact forces and end-effector position in contact-rich mobile manipulation tasks, especially without explicit force sensing. Method: This paper introduces the first unified force–position control policy for legged robots, which implicitly estimates contact forces from historical robot states and compensates for them through position and velocity adjustments, eliminating the need for physical force sensors. Training with reinforcement learning over diverse combinations of position and force commands and external disturbance forces yields robust behavior under varied contact interactions, and the policy's force-estimation module enriches trajectory-based imitation learning with contact information. Contribution/Results: The method achieves approximately 39.5% higher success rates across four challenging contact-rich manipulation tasks compared to position-control policies. It is deployed on both quadrupedal and humanoid robot platforms, demonstrating cross-morphology generalization and real-world robustness.

📝 Abstract
Robotic loco-manipulation tasks often involve contact-rich interactions with the environment, requiring the joint modeling of contact force and robot position. However, recent visuomotor policies often focus solely on learning position or force control, overlooking their co-learning. In this work, we propose the first unified policy for legged robots that jointly models force and position control learned without reliance on force sensors. By simulating diverse combinations of position and force commands alongside external disturbance forces, we use reinforcement learning to learn a policy that estimates forces from historical robot states and compensates for them through position and velocity adjustments. This policy enables a wide range of manipulation behaviors under varying force and position inputs, including position tracking, force application, force tracking, and compliant interactions. Furthermore, we demonstrate that the learned policy enhances trajectory-based imitation learning pipelines by incorporating essential contact information through its force estimation module, achieving approximately 39.5% higher success rates across four challenging contact-rich manipulation tasks compared to position-control policies. Extensive experiments on both a quadrupedal manipulator and a humanoid robot validate the versatility and robustness of the proposed policy across diverse scenarios.
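The abstract's core mechanism is to infer external contact forces from a history of robot states (rather than a force sensor) and to compensate by adjusting position and velocity targets. The paper learns this with reinforcement learning; as a rough illustration only, the sketch below substitutes a simple spring-model proxy for the learned estimator and an admittance-style rule for the learned compensation. All function names, the stiffness value, and the compliance gain are hypothetical and not from the paper.

```python
import numpy as np

def estimate_external_force(pos_history, cmd_history, stiffness=100.0):
    """Hypothetical proxy for the learned estimator: infer external force
    from the average tracking error over a short state history, using a
    linear spring model F ~ k * (x_cmd - x_meas)."""
    err = np.mean(np.asarray(cmd_history) - np.asarray(pos_history), axis=0)
    return stiffness * err

def compensate(pos_cmd, est_force, f_des=0.0, compliance=0.005):
    """Admittance-style adjustment (assumed, not the paper's learned policy):
    shift the position target so the estimated contact force is driven
    toward the desired force f_des."""
    return pos_cmd + compliance * (f_des - est_force)

# A contact pushing the end effector 0.1 m short of its 1.0 m target
# yields an estimated 10 N force; with f_des = 0 the target relaxes to 0.95 m.
force = estimate_external_force([[0.9], [0.9]], [[1.0], [1.0]])
new_cmd = compensate(1.0, force[0])
```

Setting `f_des` to a nonzero value would instead make the same rule track a desired contact force, which mirrors the range of behaviors the abstract lists (position tracking, force application, force tracking, compliant interaction).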
Problem

Research questions and friction points this paper is trying to address.

Unified force and position control for legged robots
Learning force estimation without force sensors
Enhancing manipulation tasks with contact-rich interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified force and position control policy
Force estimation without force sensors
Reinforcement learning for diverse manipulation
Peiyuan Zhi
Unknown affiliation
Peiyang Li
State Key Laboratory of General Artificial Intelligence, Beijing Institute for General Artificial Intelligence (BIGAI), Beijing University of Posts and Telecommunications
Jianqin Yin
Beijing University of Posts and Telecommunications
Baoxiong Jia
Ph.D. in Computer Science, UCLA
Computer Vision, Artificial Intelligence
Siyuan Huang
State Key Laboratory of General Artificial Intelligence, Beijing Institute for General Artificial Intelligence (BIGAI)