Imitation Learning from Suboptimal Demonstrations via Meta-Learning An Action Ranker

📅 2024-12-28
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the high cost and poor generalization of imitation learning caused by its reliance on high-quality expert demonstrations. To reduce this dependency, the authors propose a meta-learning framework tailored to suboptimal demonstrations. The method integrates weighted behavioral cloning with explicit policy-distance regularization. Specifically, (1) it introduces the first meta-learned action ranker, which dynamically reweights non-expert demonstrations using an advantage function; and (2) it adds a learnable meta-objective that explicitly constrains the learned policy's divergence from the expert policy. Evaluated on multi-task benchmarks, the approach significantly outperforms existing methods for learning from suboptimal demonstrations, achieving higher demonstration efficiency and improved policy performance while substantially reducing reliance on expert data.
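The weighted behavioral cloning at the core of this summary can be sketched minimally as follows. This is an illustrative numpy sketch, not the paper's implementation: the linear policy `policy_mean`, the sigmoid gate `ranker_weight`, and the toy advantage estimates are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_mean(theta, states):
    # Linear policy mean a = s @ theta (illustrative stand-in for a network).
    return states @ theta

def ranker_weight(adv):
    # Sigmoid gate: transitions whose estimated advantage over the current
    # policy is positive get weight near 1, the rest near 0.
    return 1.0 / (1.0 + np.exp(-adv))

def weighted_bc_loss(theta, states, actions, adv):
    # Weighted behavior cloning: per-transition squared imitation error,
    # reweighted by the advantage-based ranker.
    w = ranker_weight(adv)
    err = np.sum((policy_mean(theta, states) - actions) ** 2, axis=1)
    return np.mean(w * err)

# Toy batch: 4 states (dim 3) with 4 demonstrated actions (dim 2).
states = rng.normal(size=(4, 3))
actions = rng.normal(size=(4, 2))
theta = np.zeros((3, 2))
adv = np.array([2.0, -2.0, 0.5, -0.5])  # assumed advantage of demo vs. policy

loss = weighted_bc_loss(theta, states, actions, adv)
print(float(loss))
```

Transitions with negative estimated advantage are softly down-weighted rather than discarded, which is the mechanism the summary credits for using non-expert data.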

πŸ“ Abstract
A major bottleneck in imitation learning is the requirement for a large number of expert demonstrations, which can be expensive or inaccessible. Learning from supplementary demonstrations without strict quality requirements has emerged as a powerful paradigm to address this challenge. However, previous methods often fail to fully exploit this potential because they discard non-expert data. Our key insight is that even demonstrations that fall outside the expert distribution but outperform the learned policy can enhance policy performance. To exploit this potential, we propose a novel approach named imitation learning via meta-learning an action ranker (ILMAR). ILMAR performs weighted behavior cloning (weighted BC) on a limited set of expert demonstrations together with supplementary demonstrations, using a functional of the advantage function to selectively integrate knowledge from the supplementary demonstrations. To make more effective use of supplementary demonstrations, we introduce a meta-goal in ILMAR that optimizes this functional by explicitly minimizing the distance between the current policy and the expert policy. Comprehensive experiments across extensive tasks demonstrate that ILMAR significantly outperforms previous methods in handling suboptimal demonstrations. Code is available at https://github.com/F-GOD6/ILMAR.
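The meta-goal described in the abstract is a bi-level optimization: the inner step does weighted BC with the ranker's weights, and the outer step adjusts the ranker so that the inner update moves the policy closer to the expert. The numpy sketch below illustrates this with assumed simplifications: a linear policy, a linear-sigmoid ranker `psi`, a finite-difference meta-gradient in place of backpropagation through the inner step, and synthetic demonstrations; none of these details come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def bc_update(theta, states, actions, weights, lr=0.1):
    # One weighted-BC gradient step on a linear policy a = s @ theta.
    pred = states @ theta
    grad = states.T @ (weights[:, None] * (pred - actions)) / len(states)
    return theta - lr * grad

def expert_distance(theta, exp_states, exp_actions):
    # Meta-objective: squared distance between policy and expert actions.
    return np.mean((exp_states @ theta - exp_actions) ** 2)

def meta_step(psi, theta, sup_s, sup_a, exp_s, exp_a, meta_lr=0.5, eps=1e-4):
    # Finite-difference meta-gradient: nudge each ranker parameter, redo the
    # inner BC step, and measure the change in distance to the expert.
    grad = np.zeros_like(psi)
    for i in range(len(psi)):
        for sign in (+1.0, -1.0):
            p = psi.copy()
            p[i] += sign * eps
            w = 1.0 / (1.0 + np.exp(-(sup_s @ p)))  # state-conditioned weights
            th = bc_update(theta, sup_s, sup_a, w)
            grad[i] += sign * expert_distance(th, exp_s, exp_a)
        grad[i] /= 2 * eps
    return psi - meta_lr * grad

# Synthetic data: noisy suboptimal demos plus a few clean expert demos.
theta_star = rng.normal(size=(3, 2))                  # assumed expert policy
sup_states = rng.normal(size=(16, 3))
sup_actions = sup_states @ theta_star + rng.normal(scale=0.5, size=(16, 2))
exp_states = rng.normal(size=(8, 3))
exp_actions = exp_states @ theta_star

psi = np.zeros(3)
theta = np.zeros((3, 2))
init_dist = expert_distance(theta, exp_states, exp_actions)
for _ in range(20):
    psi = meta_step(psi, theta, sup_states, sup_actions,
                    exp_states, exp_actions)
    w = 1.0 / (1.0 + np.exp(-(sup_states @ psi)))
    theta = bc_update(theta, sup_states, sup_actions, w)
final_dist = expert_distance(theta, exp_states, exp_actions)
print(float(init_dist), float(final_dist))
```

The key design point is that the ranker is never trained to label demonstrations directly; it is trained only through its downstream effect on the policy's distance to the expert, which is what "meta-learning an action ranker" refers to.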
Problem

Research questions and friction points this paper is trying to address.

Imitation Learning
Imperfect Demonstrations
Expert Dependency Reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-Learning
Imitation Learning
Demonstration Weighting
Jiangdong Fan
University of Electronic Science and Technology of China, Chengdu, China
Hongcai He
University of Electronic Science and Technology of China, Chengdu, China
Paul Weng
Duke Kunshan University
Artificial Intelligence · Reinforcement Learning/Markov Decision Process · Qualitative/Ordinal Models
Hui Xu
University of Electronic Science and Technology of China, Chengdu, China
Jie Shao
Professor, University of Electronic Science and Technology of China
Multimedia · Database