Robust Offline Imitation Learning from Diverse Auxiliary Data

📅 2024-10-04
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
In offline imitation learning (IL), auxiliary datasets often contain unknown proportions and qualities of suboptimal or non-expert trajectories, severely degrading performance. To address this without prior assumptions about data quality or composition, we propose a robust learning framework featuring a dual-path mechanism: (1) a learned reward function identifies high-quality trajectories for weighted behavior cloning, and (2) low-quality trajectories are leveraged via TD-guided policy optimization to improve long-horizon return estimation. Crucially, our method generalizes effectively to arbitrarily mixed trajectory datasets—including those with up to 90% non-expert data—without requiring expert labels or data quality annotations. Extensive experiments across diverse heterogeneous auxiliary datasets demonstrate that our approach consistently outperforms state-of-the-art offline IL and RL+IL hybrid methods, achieving superior performance, stability, and robustness under varying data contamination levels.

📝 Abstract
Offline imitation learning enables learning a policy solely from a set of expert demonstrations, without any environment interaction. To alleviate the distribution shift arising from the small amount of expert data, recent works incorporate large numbers of auxiliary demonstrations alongside the expert data. However, the performance of these approaches relies on assumptions about the quality and composition of the auxiliary data, and they are rarely successful when those assumptions do not hold. To address this limitation, we propose Robust Offline Imitation from Diverse Auxiliary Data (ROIDA). ROIDA first identifies high-quality transitions from the entire auxiliary dataset using a learned reward function. These high-reward samples are combined with the expert demonstrations for weighted behavioral cloning. For lower-quality samples, ROIDA applies temporal difference learning to steer the policy towards high-reward states, improving long-term returns. This two-pronged approach enables our framework to effectively leverage both high- and low-quality data without any assumptions. Extensive experiments validate that ROIDA achieves robust and consistent performance across multiple auxiliary datasets with diverse ratios of expert and non-expert demonstrations. ROIDA effectively leverages unlabeled auxiliary data, outperforming prior methods reliant on specific data assumptions.
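The two-pronged objective described above can be sketched as a combined loss: reward-weighted behavior cloning on transitions scored by a learned reward model, plus a TD term on the remaining data. This is a minimal toy illustration, not the paper's implementation; all names, the exponential weighting scheme, the temperature, and the trade-off coefficient are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch: states, actions, and per-transition scores from a learned reward
# model. Shapes and the reward model itself are stand-ins (assumptions).
n, sdim, adim = 8, 4, 2
states = rng.normal(size=(n, sdim))
actions = rng.normal(size=(n, adim))         # dataset actions
policy_actions = rng.normal(size=(n, adim))  # current policy's predictions
learned_reward = rng.uniform(size=n)         # output of the learned reward fn

# (1) Weighted behavior cloning: higher-reward transitions get larger weights.
tau = 0.5                                    # temperature (assumed hyperparameter)
weights = np.exp(learned_reward / tau)
weights /= weights.sum()                     # normalize to a distribution
bc_loss = np.sum(weights * np.sum((policy_actions - actions) ** 2, axis=1))

# (2) TD-guided term for lower-quality data: one-step TD error against a toy
# value function, steering the policy toward high-reward states.
gamma = 0.99
values = rng.normal(size=n)                  # V(s), stand-in for a learned critic
next_values = rng.normal(size=n)             # V(s')
td_error = learned_reward + gamma * next_values - values
td_loss = np.mean(td_error ** 2)

total_loss = bc_loss + 0.1 * td_loss         # 0.1: assumed trade-off coefficient
print(f"bc_loss={bc_loss:.3f}, td_loss={td_loss:.3f}, total={total_loss:.3f}")
```

Because the weighting is computed from the learned reward rather than from labels, no prior knowledge of the expert/non-expert mix is required, which is the key robustness property the abstract claims.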
Problem

Research questions and friction points this paper is trying to address.

Offline Imitation Learning
Limited Expert Demonstrations
Variable Quality Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

ROIDA
Offline Imitation Learning
Quality-Robust Data Handling