Latent Diffusion Planning for Imitation Learning

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Imitation learning relies heavily on abundant high-quality expert demonstrations and struggles to leverage action-unlabeled or suboptimal demonstration data. Method: We propose a latent-space diffusion-based planning framework, the first to employ diffusion models for state-sequence planning in a learned latent space, while decoupling planning from action generation. Using a VAE, we construct a compact latent space in which state prediction and inverse dynamics are jointly modeled, enabling end-to-end training on both action-free and suboptimal demonstrations. Contribution/Results: Evaluated on vision-based robotic manipulation tasks in simulation, our method significantly outperforms state-of-the-art approaches and remains robust with low-quality and weakly labeled demonstrations. It establishes a new paradigm for reducing imitation learning's dependence on large-scale, high-fidelity expert data.

📝 Abstract
Recent progress in imitation learning has been enabled by policy architectures that scale to complex visuomotor tasks, multimodal distributions, and large datasets. However, these methods often rely on learning from large amounts of expert demonstrations. To address these shortcomings, we propose Latent Diffusion Planning (LDP), a modular approach consisting of a planner that can leverage action-free demonstrations and an inverse dynamics model that can leverage suboptimal data, both of which operate over a learned latent space. First, we learn a compact latent space through a variational autoencoder, enabling effective forecasting of future states in image-based domains. Then, we train a planner and an inverse dynamics model with diffusion objectives. By separating planning from action prediction, LDP can benefit from the denser supervision signals of suboptimal and action-free data. On simulated visual robotic manipulation tasks, LDP outperforms state-of-the-art imitation learning approaches, which cannot leverage such additional data.
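The decoupled pipeline described in the abstract (encode observations with a VAE, plan a future latent state sequence with a diffusion model, then recover actions with an inverse dynamics model) can be sketched at inference time as follows. This is a toy NumPy sketch, not the paper's implementation: the encoder, denoiser, inverse dynamics model, dimensions, and the simplified denoising update are all illustrative stand-ins for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
LATENT_DIM, ACTION_DIM, HORIZON, DENOISE_STEPS = 8, 4, 5, 10

def encode(obs):
    """Stand-in for the VAE encoder: image observation -> compact latent."""
    return np.tanh(obs @ W_enc)

def denoise(z_seq, t):
    """Stand-in for the diffusion planner's denoising network.
    A real implementation would be a learned noise predictor."""
    return z_seq @ W_plan / (1.0 + t)

def plan_latents(z0):
    """Iteratively refine a noisy latent state sequence into a plan,
    conditioned on the current latent state z0."""
    z_seq = rng.normal(size=(HORIZON, LATENT_DIM))
    for t in reversed(range(DENOISE_STEPS)):
        z_seq = z_seq - 0.1 * denoise(z_seq, t)  # toy update, not real DDPM
        z_seq[0] = z0                            # clamp to current state
    return z_seq

def inverse_dynamics(z_t, z_next):
    """Stand-in inverse dynamics: consecutive latents -> action."""
    return np.concatenate([z_t, z_next]) @ W_inv

# Random "weights" standing in for trained parameters.
W_enc = rng.normal(size=(16, LATENT_DIM)) * 0.1
W_plan = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1
W_inv = rng.normal(size=(2 * LATENT_DIM, ACTION_DIM)) * 0.1

obs = rng.normal(size=16)          # flattened image observation
z0 = encode(obs)
plan = plan_latents(z0)            # (HORIZON, LATENT_DIM) latent state plan
actions = np.array([inverse_dynamics(plan[i], plan[i + 1])
                    for i in range(HORIZON - 1)])
print(plan.shape, actions.shape)   # (5, 8) (4, 4)
```

The key design point the sketch makes concrete: the planner never sees actions, so it can be trained on action-free data, while the inverse dynamics model only needs valid state-transition/action triples, so suboptimal demonstrations suffice.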
Problem

Research questions and friction points this paper is trying to address.

Learning from limited expert demonstrations
Leveraging action-free and suboptimal data
Improving imitation learning in visual robotic tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent space learning via variational autoencoder
Diffusion-based planner and inverse dynamics
Modular approach for diverse data utilization
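The two training objectives implied by the bullets above can be illustrated with a minimal NumPy sketch: a DDPM-style denoising loss on latent state sequences (usable with action-free data) and a separate regression loss for the inverse dynamics model (usable with suboptimal but action-labeled data). Dimensions, the noise schedule, and the placeholder networks are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions (illustrative, not from the paper).
LATENT_DIM, ACTION_DIM, HORIZON = 8, 4, 5

def diffusion_loss(z_seq, noise_pred_fn, t, alpha_bar):
    """DDPM-style denoising loss on a latent state sequence.
    Trainable on action-free data: no actions appear anywhere here."""
    eps = rng.normal(size=z_seq.shape)
    z_noisy = np.sqrt(alpha_bar[t]) * z_seq + np.sqrt(1 - alpha_bar[t]) * eps
    return np.mean((noise_pred_fn(z_noisy, t) - eps) ** 2)

def inverse_dynamics_loss(z_t, z_next, action, idm_fn):
    """Regression loss for the inverse dynamics model.
    Trainable on suboptimal data: it only needs valid (s, s', a) triples."""
    return np.mean((idm_fn(z_t, z_next) - action) ** 2)

# Placeholder "networks" standing in for learned models.
W_plan = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1
W_idm = rng.normal(size=(2 * LATENT_DIM, ACTION_DIM)) * 0.1
noise_pred = lambda z, t: z @ W_plan
idm = lambda z, zn: np.concatenate([z, zn]) @ W_idm

alpha_bar = np.linspace(0.99, 0.01, 10)   # toy noise schedule
z_seq = rng.normal(size=(HORIZON, LATENT_DIM))
l_plan = diffusion_loss(z_seq, noise_pred, t=3, alpha_bar=alpha_bar)
l_idm = inverse_dynamics_loss(z_seq[0], z_seq[1],
                              rng.normal(size=ACTION_DIM), idm)
print(l_plan >= 0, l_idm >= 0)  # True True
```

Because the two losses share no parameters beyond the latent space itself, each component can draw on whichever data source supervises it, which is the modularity the paper credits for its data efficiency.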