When a Robot is More Capable than a Human: Learning from Constrained Demonstrators

📅 2025-10-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Human expert demonstrations are often suboptimal due to indirect control interfaces, safety constraints, and other practical limitations, which caps the performance of imitation learning policies. Method: We propose a state-progress-based reward inference framework that infers a state-only reward measuring task progress from the demonstrations and uses temporal interpolation to self-label rewards for unobserved states, allowing the agent to go beyond direct imitation and explore shorter, more efficient trajectories. Contribution/Results: Our approach eliminates reliance on action labels, inferring the implicit task objective solely from constrained state sequences—enabling policies that surpass the demonstrator's capability. Evaluated on a real WidowX robotic arm, our method completes the task in 12 seconds—10× faster than behavioral cloning—while improving sample efficiency and generalizing to unseen scenarios.

📝 Abstract
Learning from demonstrations enables experts to teach robots complex tasks using interfaces such as kinesthetic teaching, joystick control, and sim-to-real transfer. However, these interfaces often constrain the expert's ability to demonstrate optimal behavior due to indirect control, setup restrictions, and hardware safety. For example, a joystick can move a robotic arm only in a 2D plane, even though the robot operates in a higher-dimensional space. As a result, the demonstrations collected by constrained experts lead to suboptimal performance of the learned policies. This raises a key question: Can a robot learn a better policy than the one demonstrated by a constrained expert? We address this by allowing the agent to go beyond direct imitation of expert actions and explore shorter and more efficient trajectories. We use the demonstrations to infer a state-only reward signal that measures task progress, and self-label reward for unknown states using temporal interpolation. Our approach outperforms common imitation learning in both sample efficiency and task completion time. On a real WidowX robotic arm, it completes the task in 12 seconds, 10x faster than behavioral cloning, as shown in real-robot videos on https://sites.google.com/view/constrainedexpert .
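The abstract's reward-inference idea—label each demonstration state with its task progress, then self-label unseen states by temporal interpolation—can be sketched as follows. This is a minimal illustration under assumed details, not the paper's implementation: the function names (`demo_progress`, `interpolated_reward`), the Euclidean distance metric, and the projection-onto-segment interpolation rule are all assumptions for illustration.

```python
import numpy as np

def demo_progress(demo):
    """Progress label for each demo state: fraction of the trajectory completed."""
    T = len(demo) - 1
    return np.arange(len(demo)) / T

def interpolated_reward(state, demo, progress):
    """Self-label a reward for a state not seen in the demonstration:
    find the nearest demo state, project onto the adjacent demo segment,
    and linearly interpolate the two endpoints' progress values."""
    d = np.linalg.norm(demo - state, axis=1)
    i = int(np.argmin(d))
    # choose the neighboring demo state forming the closer segment
    j = i + 1 if i + 1 < len(demo) and (i == 0 or d[i + 1] <= d[i - 1]) else i - 1
    lo, hi = min(i, j), max(i, j)
    seg = demo[hi] - demo[lo]
    denom = seg @ seg
    t = 0.0 if denom == 0 else float(np.clip((state - demo[lo]) @ seg / denom, 0.0, 1.0))
    return (1 - t) * progress[lo] + t * progress[hi]
```

Because the reward depends only on states, no action labels are needed, and an exploring agent can earn higher progress per step than the constrained demonstration ever showed—e.g., by cutting across the 2D-constrained joystick path in the full workspace.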
Problem

Research questions and friction points this paper is trying to address.

Overcoming suboptimal robot policies from constrained human demonstrations
Learning beyond direct imitation of expert actions and trajectories
Inferring reward signals from limited demonstration data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Infer state-only reward from demonstrations
Self-label rewards via temporal interpolation
Explore shorter, more efficient trajectories beyond direct imitation
👥 Authors
Xinhu Li
Thomas Lord Department of Computer Science, University of Southern California
Ayush Jain
Thomas Lord Department of Computer Science, University of Southern California
Zhaojing Yang
Thomas Lord Department of Computer Science, University of Southern California
Yigit Korkmaz
Thomas Lord Department of Computer Science, University of Southern California
Erdem Bıyık
Assistant Professor, University of Southern California
Robotics · Human-Robot Interaction · Machine Learning · Artificial Intelligence