Latent Weight Diffusion: Generating Policies from Trajectories

📅 2024-10-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion-based policy models for robotic imitation learning suffer from large parameter counts, slow inference, and a pronounced trade-off between trajectory accuracy and action horizon. To address these issues, the paper proposes Latent Weight Diffusion (LWD), which applies diffusion not to explicit trajectories but to a latent distribution over policy weights, decoded via a hypernetwork into lightweight, closed-loop policies. LWD combines latent-space encoding of demonstration trajectories, hypernetwork-based weight decoding, and diffusion denoising within the latent space. On the Meta-World MT10 benchmark, LWD achieves higher success rates than a vanilla multi-task baseline while using inference-time policies up to ~18x smaller, outperforms Diffusion Policy at long action horizons, and requires fewer diffusion queries during rollout, balancing generalization with real-time execution.

📝 Abstract
With the increasing availability of open-source robotic data, imitation learning has emerged as a viable approach for both robot manipulation and locomotion. Currently, large generalized policies are trained to predict controls or trajectories using diffusion models, which have the desirable property of learning multimodal action distributions. However, generalizability comes with a cost - namely, larger model size and slower inference. Further, there is a known trade-off between performance and action horizon for Diffusion Policy (i.e., diffusing trajectories): fewer diffusion queries accumulate greater trajectory tracking errors. Thus, it is common practice to run these models at high inference frequency, subject to robot computational constraints. To address these limitations, we propose Latent Weight Diffusion (LWD), a method that uses diffusion to learn a distribution over policies for robotic tasks, rather than over trajectories. Our approach encodes demonstration trajectories into a latent space and then decodes them into policies using a hypernetwork. We employ a diffusion denoising model within this latent space to learn its distribution. We demonstrate that LWD can reconstruct the behaviors of the original policies that generated the trajectory dataset. LWD offers the benefits of considerably smaller policy networks during inference and requires fewer diffusion model queries. When tested on the Metaworld MT10 benchmark, LWD achieves a higher success rate compared to a vanilla multi-task policy, while using models up to ~18x smaller during inference. Additionally, since LWD generates closed-loop policies, we show that it outperforms Diffusion Policy in long action horizon settings, with reduced diffusion queries during rollout.
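The abstract's pipeline (sample a latent, decode it through a hypernetwork into policy weights, then run the small policy closed-loop) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: all dimensions (`LATENT_DIM`, `OBS_DIM`, etc.), the linear hypernetwork, and the toy dynamics are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical dimensions for illustration (not from the paper).
LATENT_DIM = 8   # size of the policy latent z
OBS_DIM = 4      # observation dimension
ACT_DIM = 2      # action dimension
HIDDEN = 16      # hidden width of the decoded policy MLP

# Total parameter count of the small two-layer policy.
N_PARAMS = (OBS_DIM * HIDDEN + HIDDEN) + (HIDDEN * ACT_DIM + ACT_DIM)

rng = np.random.default_rng(0)

# Stand-in for the learned hypernetwork: a single linear map from the
# latent to the flat parameter vector of the policy.
W_hyper = rng.normal(scale=0.1, size=(N_PARAMS, LATENT_DIM))

def decode_policy(z):
    """Decode a latent z into the weights of a small MLP policy."""
    flat = W_hyper @ z
    i = 0
    W1 = flat[i:i + OBS_DIM * HIDDEN].reshape(OBS_DIM, HIDDEN)
    i += OBS_DIM * HIDDEN
    b1 = flat[i:i + HIDDEN]
    i += HIDDEN
    W2 = flat[i:i + HIDDEN * ACT_DIM].reshape(HIDDEN, ACT_DIM)
    i += HIDDEN * ACT_DIM
    b2 = flat[i:i + ACT_DIM]

    def policy(obs):
        h = np.tanh(obs @ W1 + b1)
        return h @ W2 + b2

    return policy

# One latent sample stands in for the output of the latent diffusion
# model; in LWD it would be produced by iterative denoising, then
# decoded once per episode.
z = rng.normal(size=LATENT_DIM)
policy = decode_policy(z)

# Closed-loop rollout: the decoded policy is queried every step,
# with no further diffusion calls.
obs = np.zeros(OBS_DIM)
for _ in range(5):
    action = policy(obs)
    obs = 0.9 * obs + 0.1 * np.concatenate([action, action])  # toy dynamics
```

The key inference-cost point is visible here: the expensive generative step (sampling `z`) happens once, while the per-step control loop only evaluates a tiny MLP.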
Problem

Research questions and friction points this paper is trying to address.

Generating reactive policies to replace trajectory-based diffusion models
Reducing inference cost while maintaining high performance in robotics
Improving robustness to perturbations and longer action horizons
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates neural policy weights via diffusion
Enables longer action horizons efficiently
Reduces inference compute cost significantly
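The compute saving comes from where the denoising loop sits: instead of diffusing an action trajectory every few control steps, LWD runs the loop once over a small policy latent. A minimal DDPM-style sampler over that latent can be sketched as follows; the step count, noise schedule, and the placeholder noise predictor `eps_model` are all illustrative assumptions, not the paper's trained model.

```python
import numpy as np

T = 10           # few denoising steps (illustrative)
LATENT_DIM = 8   # assumed latent size, matching nothing in particular

# Standard linear beta schedule and its cumulative products.
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

rng = np.random.default_rng(1)
A = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM))

def eps_model(z, t):
    """Placeholder noise predictor; in LWD this is a trained network."""
    return A @ z

# Start from pure Gaussian noise and denoise step by step.
z = rng.normal(size=LATENT_DIM)
for t in reversed(range(T)):
    eps = eps_model(z, t)
    z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:  # no noise added at the final step
        z = z + np.sqrt(betas[t]) * rng.normal(size=LATENT_DIM)

# z is now a sampled policy latent; it would be decoded once into
# policy weights and then run closed-loop for the whole episode.
```

Because this loop is amortized over an entire episode rather than repeated per action chunk, the number of diffusion queries during rollout drops sharply relative to trajectory diffusion.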