Efficient Active Imitation Learning with Random Network Distillation

📅 2024-11-04
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Traditional imitation learning exhibits poor generalization on complex tasks without explicit reward functions (e.g., video games, robotic control), while active learning approaches relying on dense human demonstrations incur prohibitive annotation costs. Method: We propose a novel active imitation learning framework that integrates Random Network Distillation (RND) as a state-level out-of-distribution (OOD) detector into the DAgger pipeline. Expert intervention is triggered only upon OOD state detection—eliminating per-frame action comparison—and combined with behavioral cloning and state representation learning. Contribution/Results: Our method significantly reduces expert query overhead by 37–62% over baselines across racing, third-person navigation, and robotic locomotion tasks. It simultaneously improves policy performance and enhances cross-scenario generalization robustness, demonstrating the efficacy of RND-guided sparse expert querying in imitation learning.

Technology Category

Application Category

📝 Abstract
Developing agents for complex and underspecified tasks, where no clear objective exists, remains challenging but offers many opportunities. This is especially true in video games, where simulated players (bots) need to play realistically, and there is no clear reward to evaluate them. While imitation learning has shown promise in such domains, these methods often fail when agents encounter out-of-distribution scenarios during deployment. Expanding the training dataset is a common solution, but it becomes impractical or costly when relying on human demonstrations. This article addresses active imitation learning, aiming to trigger expert intervention only when necessary, reducing the need for constant expert input along training. We introduce Random Network Distillation DAgger (RND-DAgger), a new active imitation learning method that limits expert querying by using a learned state-based out-of-distribution measure to trigger interventions. This approach avoids frequent expert-agent action comparisons, thus making the expert intervene only when it is useful. We evaluate RND-DAgger against traditional imitation learning and other active approaches in 3D video games (racing and third-person navigation) and in a robotic locomotion task and show that RND-DAgger surpasses previous methods by reducing expert queries. https://sites.google.com/view/rnd-dagger
Problem

Research questions and friction points this paper is trying to address.

Developing agents for complex tasks without clear objectives
Reducing expert intervention in imitation learning scenarios
Improving robustness in out-of-distribution situations during deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Random Network Distillation for active learning
Triggers expert intervention only when necessary
Reduces expert queries with state-based OOD measure
🔎 Similar Papers
No similar papers found.