AI Summary
This paper addresses the high cost and poor generalization of imitation learning caused by its reliance on high-quality expert demonstrations. To reduce this dependency, we propose a meta-learning framework tailored to suboptimal demonstrations. Our method integrates weighted behavioral cloning with explicit policy-distance regularization. Specifically, (1) we design the first meta-learned action ranker, which dynamically reweights non-expert demonstrations using an advantage function; and (2) we introduce a learnable meta-objective that explicitly constrains the learned policy's divergence from the expert policy. Evaluated on multi-task benchmarks, our approach significantly outperforms existing methods for learning from suboptimal demonstrations, achieving higher demonstration efficiency and better policy performance while substantially reducing reliance on expert data.
Abstract
A major bottleneck in imitation learning is its requirement for a large number of expert demonstrations, which can be expensive or inaccessible. Learning from supplementary demonstrations without strict quality requirements has emerged as a powerful paradigm to address this challenge. However, previous methods often discard non-expert data and thus fail to exploit its full potential. Our key insight is that even demonstrations falling outside the expert distribution can improve the policy, provided they outperform the currently learned policy. To exploit this potential, we propose a novel approach named imitation learning via meta-learning an action ranker (ILMAR). ILMAR applies weighted behavior cloning (weighted BC) to a limited set of expert demonstrations together with supplementary demonstrations, using a functional of the advantage function to selectively integrate knowledge from the supplementary data. To make more effective use of supplementary demonstrations, ILMAR further introduces a meta-goal that optimizes this functional by explicitly minimizing the distance between the current policy and the expert policy. Comprehensive experiments across extensive tasks demonstrate that ILMAR significantly outperforms previous methods in handling suboptimal demonstrations. Code is available at https://github.com/F-GOD6/ILMAR.
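To make the weighting idea concrete, the following is a minimal sketch (not the authors' implementation; see the repository for the real code) of advantage-weighted behavior cloning: each demonstration transition's squared BC error is scaled by a weight derived from an advantage estimate, so supplementary actions that look better than the current policy contribute more. The sigmoid weighting, array shapes, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Map an advantage estimate to a weight in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def weighted_bc_loss(pred_actions, demo_actions, advantages):
    """Per-sample squared BC error, reweighted by sigmoid(advantage)."""
    w = sigmoid(advantages)                              # (batch,)
    per_sample = ((pred_actions - demo_actions) ** 2).sum(axis=-1)
    return float((w * per_sample).mean())

pred = rng.normal(size=(8, 2))   # actions from the current policy
demo = rng.normal(size=(8, 2))   # demonstration actions (expert + suboptimal)
adv = rng.normal(size=8)         # advantage estimate per demonstration action

loss = weighted_bc_loss(pred, demo, adv)

# Transitions judged much worse than the current policy are nearly ignored,
# while clearly advantageous ones contribute almost their full BC error:
low = weighted_bc_loss(pred, demo, np.full(8, -10.0))
high = weighted_bc_loss(pred, demo, np.full(8, 10.0))
assert low < high
```

In ILMAR the weighting function itself is meta-learned (the "action ranker"), with an outer meta-goal pushing the resulting policy toward the expert policy; the fixed sigmoid above only stands in for that learned component.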