Diffusing States and Matching Scores: A New Framework for Imitation Learning

📅 2024-10-17
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the instability and compounding-error issues in imitation learning that arise from unreliable distribution matching. The authors propose an offline imitation learning framework that brings diffusion models into sequential decision-making. Methodologically, the approach abandons adversarial training: it constructs a diffusion process over the state space and aligns the state distributions of expert and learner trajectories via noise-conditional score-function regression, i.e., score matching along diffused states. Theoretically, the paper proves first- and second-order instance-dependent error bounds that scale linearly with the task horizon, and because the method requires no discriminator, training is markedly more stable. Empirically, on continuous-control benchmarks, including humanoid walking, sitting, crawling, and navigating through obstacles, the method outperforms both GAN-based and discriminator-free baselines, yielding higher-fidelity trajectory generation and more robust training.
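The noise-conditional score regression the summary describes can be written as a standard denoising score-matching objective. The notation below is a hedged sketch (the symbols $\alpha_t$, $\sigma_t$, and the expert state distribution $\rho^{\pi_E}$ are generic diffusion-model conventions, not necessarily the paper's exact formulation):

```latex
\min_{\theta}\;
\mathbb{E}_{\,s \sim \rho^{\pi_E},\; t,\; \epsilon \sim \mathcal{N}(0, I)}
\left[\,
\left\| s_{\theta}\!\left(\alpha_t\, s + \sigma_t\, \epsilon,\; t\right)
+ \frac{\epsilon}{\sigma_t} \right\|^{2}
\right]
```

Minimizing this plain regression loss makes $s_{\theta}$ approximate the score of the diffused expert-state distribution at each noise level $t$, with no discriminator or adversarial inner loop.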

📝 Abstract
Adversarial Imitation Learning is traditionally framed as a two-player zero-sum game between a learner and an adversarially chosen cost function, and can therefore be thought of as the sequential generalization of a Generative Adversarial Network (GAN). However, in recent years, diffusion models have emerged as a non-adversarial alternative to GANs that merely require training a score function via regression, yet produce generations of higher quality. In response, we investigate how to lift insights from diffusion modeling to the sequential setting. We propose diffusing states and performing score-matching along diffused states to measure the discrepancy between the expert's and learner's states. Thus, our approach only requires training score functions to predict noises via standard regression, making it significantly easier and more stable to train than adversarial methods. Theoretically, we prove first- and second-order instance-dependent bounds with linear scaling in the horizon, proving that our approach avoids the compounding errors that stymie offline approaches to imitation learning. Empirically, we show our approach outperforms both GAN-style imitation learning baselines and discriminator-free imitation learning baselines across various continuous control problems, including complex tasks like controlling humanoids to walk, sit, crawl, and navigate through obstacles.
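The recipe in the abstract, corrupt expert states with noise and train a score function by ordinary regression to predict that noise, can be sketched in a few lines. Everything below is illustrative only: the toy 2-D "states", the single fixed noise level, and the linear score model are assumptions for the sketch, not the paper's architecture or diffusion schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expert "state" distribution: 2-D Gaussian states (stand-in for
# states collected from expert trajectories).
expert_states = rng.normal(loc=[1.0, -1.0], scale=0.3, size=(512, 2))

sigma = 0.5  # a single fixed diffusion noise level, for simplicity

def noised_batch(states):
    """Diffuse states: add Gaussian noise, return noised states and the noise."""
    eps = rng.standard_normal(states.shape)
    return states + sigma * eps, eps

# Linear noise-prediction model pred(x) = W x + b, trained by plain
# least-squares regression to recover the injected noise eps.
W = np.zeros((2, 2))
b = np.zeros(2)
lr = 0.05
for step in range(1000):
    x_t, eps = noised_batch(expert_states)
    pred = x_t @ W.T + b
    err = pred - eps                      # regression residual
    # Gradient step on the mean-squared noise-prediction error.
    W -= lr * (err.T @ x_t) / len(x_t)
    b -= lr * err.mean(axis=0)

# Evaluate the regression loss on a fresh diffused batch.
x_t, eps = noised_batch(expert_states)
final_loss = np.mean((x_t @ W.T + b - eps) ** 2)
print(final_loss)
```

For Gaussian corruption, the learned noise predictor relates to the score of the diffused state distribution via score(x_t) ≈ -pred(x_t) / sigma, which is the quantity a learner's states would be matched against. No adversarial game appears anywhere: training is a single stable regression.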
Problem

Research questions and friction points this paper is trying to address.

Adversarial imitation learning frames training as a two-player zero-sum game, which is unstable and hard to optimize in practice.
Offline imitation learning methods suffer compounding errors that grow with the task horizon.
Reliably measuring the discrepancy between expert and learner state distributions without a discriminator remains an open challenge.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffuses states and performs score matching along diffused states to measure the expert–learner discrepancy
Trains score functions to predict noise via standard regression, with no discriminator or adversarial optimization
Proves first- and second-order instance-dependent bounds with linear horizon scaling, avoiding compounding errors