Learning to Assist: Physics-Grounded Human-Human Control via Multi-Agent Reinforcement Learning

📅 2026-03-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge that existing virtual characters and humanoid robots struggle to simulate physically interactive assistance behaviors, which require continuous perception of a partner and dynamic adaptation. The authors formulate human-like close-contact force interactions as a multi-agent reinforcement learning problem, jointly training the policies of a supporter and a recipient in physical simulation to track a reference sequence of assistive motions. By introducing partner-policy initialization, dynamic reference retargeting, and a contact-promoting reward, the approach is the first to successfully track complex assistive interaction tasks, overcoming the limitations of single-agent methods. Experiments on standard benchmarks demonstrate the effectiveness of the proposed framework for embodied, socially aware humanoid control.

πŸ“ Abstract
Humanoid robotics has strong potential to transform daily service and caregiving applications. Although recent advances in general motion tracking (GMT) within physics engines have enabled virtual characters and humanoid robots to reproduce a broad range of human motions, these behaviors are largely limited to contact-free social interactions or isolated movements. Assistive scenarios, by contrast, require continuous awareness of a human partner and rapid adaptation to their evolving posture and dynamics. In this paper, we formulate the imitation of closely interacting, force-exchanging human-human motion sequences as a multi-agent reinforcement learning problem. We jointly train partner-aware policies for both the supporter (assistant) agent and the recipient agent in a physics simulator to track assistive motion references. To make this problem tractable, we introduce a partner-policy initialization scheme that transfers priors from single-human motion-tracking controllers, greatly improving exploration. We further propose dynamic reference retargeting and a contact-promoting reward, which adapt the assistant's reference motion to the recipient's real-time pose and encourage physically meaningful support. We show that our method, AssistMimic, is the first capable of successfully tracking assistive interaction motions on established benchmarks, demonstrating the benefits of a multi-agent RL formulation for physically grounded and socially aware humanoid control.
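The reward structure the abstract describes, tracking a reference while being encouraged toward physically meaningful contact, with the assistant's reference adapted to the recipient's live pose, can be sketched roughly as below. The function names, weights, and reward shapes here are illustrative assumptions for intuition only, not the paper's actual implementation.

```python
import numpy as np

def tracking_reward(pose, ref_pose, sigma=0.5):
    """Exponentiated tracking error, a common shape in motion-imitation RL
    (assumed form; the paper's exact reward terms are not given here)."""
    err = np.linalg.norm(pose - ref_pose)
    return float(np.exp(-(err ** 2) / (2 * sigma ** 2)))

def contact_promoting_reward(contact_force, target_force=50.0, scale=25.0):
    """Reward peaking when the supporter applies force near a target
    magnitude, encouraging physically meaningful support (assumed form)."""
    return float(np.exp(-abs(contact_force - target_force) / scale))

def retarget_reference(assistant_ref, recipient_pose, recipient_ref, alpha=0.5):
    """Dynamic reference retargeting (assumed form): shift the assistant's
    reference by a fraction of the recipient's deviation from its own
    reference, so the assistant tracks where the partner actually is."""
    return assistant_ref + alpha * (recipient_pose - recipient_ref)

# Toy joint step for the two agents on a 3-D pose vector.
rng = np.random.default_rng(0)
assistant_ref = rng.normal(size=3)
recipient_ref = rng.normal(size=3)
recipient_pose = recipient_ref + 0.2   # recipient has drifted off-reference
assistant_pose = assistant_ref.copy()

adapted_ref = retarget_reference(assistant_ref, recipient_pose, recipient_ref)
r_track = tracking_reward(assistant_pose, adapted_ref)
r_contact = contact_promoting_reward(contact_force=40.0)
r_total = 0.8 * r_track + 0.2 * r_contact
print(round(r_total, 3))
```

In the actual method, both agents' policies would be optimized against rewards of this general flavor inside a physics simulator; the point of the retargeting term is that the assistant is graded against where the recipient actually is, not where the motion-capture reference says they should be.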
Problem

Research questions and friction points this paper is trying to address.

human-human interaction
assistive motion
multi-agent reinforcement learning
physics-based simulation
humanoid control
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent reinforcement learning
physics-grounded humanoid control
assistive interaction
dynamic reference retargeting
contact-promoting reward