🤖 AI Summary
Human expert demonstrations are often suboptimal due to indirect control, safety constraints, and other practical limitations, thereby restricting the performance of imitation learning policies.
Method: We propose a state-progress reward inference framework that uses temporal interpolation to self-label rewards for unobserved states, allowing the agent to go beyond direct imitation of expert actions and explore shorter, more efficient trajectories in the robot's full high-dimensional workspace.
Contribution/Results: Our approach eliminates reliance on action labels, instead inferring implicit task objectives solely from constrained state sequences, enabling policies that surpass the demonstrator's capability. Evaluated on a real WidowX robotic arm, our method completes the task in 12 seconds, 10x faster than behavioral cloning, while significantly improving sample efficiency and generalization to unseen scenarios.
📝 Abstract
Learning from demonstrations enables experts to teach robots complex tasks through interfaces such as kinesthetic teaching, joystick control, and sim-to-real transfer. However, these interfaces often constrain the expert's ability to demonstrate optimal behavior due to indirect control, setup restrictions, and hardware safety. For example, a joystick can move a robotic arm only in a 2D plane, even though the robot operates in a higher-dimensional space. As a result, demonstrations collected from constrained experts lead to suboptimal performance of the learned policies. This raises a key question: Can a robot learn a better policy than the one demonstrated by a constrained expert? We address this by allowing the agent to go beyond direct imitation of expert actions and explore shorter, more efficient trajectories. We use the demonstrations to infer a state-only reward signal that measures task progress, and self-label rewards for unknown states using temporal interpolation. Our approach outperforms standard imitation learning in both sample efficiency and task completion time. On a real WidowX robotic arm, it completes the task in 12 seconds, 10x faster than behavioral cloning, as shown in the real-robot videos at https://sites.google.com/view/constrainedexpert.
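The core idea of a state-only progress reward with temporal interpolation can be sketched in a few lines. This is an illustrative simplification under our own assumptions (not the paper's exact formulation): each demonstration state is assigned a progress value proportional to its time index, and an unseen state is self-labeled by distance-weighted interpolation between its two nearest demonstration states.

```python
import numpy as np

def progress_reward(state, demo_states):
    """Reward = estimated task progress of `state` along a demo trajectory.

    Demo state i gets progress i / (T - 1), so progress runs from 0 to 1.
    An unseen state is self-labeled by interpolating the progress values
    of its two nearest demo states, weighted by inverse distance.
    Illustrative sketch only; the actual method may differ in detail.
    """
    demo = np.asarray(demo_states, dtype=float)
    T = len(demo)
    progress = np.arange(T) / (T - 1)            # 0.0 ... 1.0
    dists = np.linalg.norm(demo - state, axis=1)  # distance to each demo state
    i, j = np.argsort(dists)[:2]                  # two nearest demo states
    if dists[i] < 1e-9:                           # state lies on a demo state
        return progress[i]
    w_i, w_j = 1.0 / dists[i], 1.0 / dists[j]     # inverse-distance weights
    return (w_i * progress[i] + w_j * progress[j]) / (w_i + w_j)
```

For a straight-line demo `[[0, 0], [1, 0], [2, 0]]`, a state exactly on the second demo point receives reward 0.5, and a state midway between the first two points receives the average of their progress values. Such a signal lets a policy be rewarded for reaching later task stages via trajectories the constrained expert never showed.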