Make-An-Agent: A Generalizable Policy Network Generator with Behavior-Prompted Diffusion

📅 2024-07-15
🏛️ Neural Information Processing Systems
📈 Citations: 2
✨ Influential: 0
🤖 AI Summary
Can executable control policies be generated from a single behavioral demonstration? This paper proposes a behavior-prompted, diffusion-based policy-generation paradigm: demonstration trajectories are encoded as conditioning signals, a conditional diffusion model synthesizes policy-parameter representations in latent space, and a lightweight decoder turns these representations into deployable policy networks. The method generalizes across tasks and robot platforms from only few-shot demonstrations, and generated policies are deployed end-to-end on a real quadrupedal robot. Across diverse simulated tasks and real-world locomotion, strong performance is attained with only 1–3 demonstrations; on unseen tasks, average performance reaches 92% of expert-policy performance. The core innovation is formulating policy generation as a conditional diffusion process, unifying behavioral prompting, latent representation learning, and policy decoding in a single coherent framework.

๐Ÿ“ Abstract
Can we generate a control policy for an agent using just one demonstration of desired behaviors as a prompt, as effortlessly as creating an image from a textual description? In this paper, we present Make-An-Agent, a novel policy parameter generator that leverages the power of conditional diffusion models for behavior-to-policy generation. Guided by behavior embeddings that encode trajectory information, our policy generator synthesizes latent parameter representations, which can then be decoded into policy networks. Trained on policy network checkpoints and their corresponding trajectories, our generation model demonstrates remarkable versatility and scalability on multiple tasks and has a strong generalization ability on unseen tasks to output well-performed policies with only few-shot demonstrations as inputs. We showcase its efficacy and efficiency on various domains and tasks, including varying objectives, behaviors, and even across different robot manipulators. Beyond simulation, we directly deploy policies generated by Make-An-Agent onto real-world robots on locomotion tasks. Project page: https://cheryyunl.github.io/make-an-agent/
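The pipeline the abstract describes, behavior embedding, conditional diffusion over latent parameters, then policy decoding, can be sketched as toy NumPy code. Every dimension, weight matrix, and the simplified denoising update below is an illustrative stand-in, not the paper's actual architecture or training procedure.

```python
import numpy as np

# Hypothetical sizes; the paper's real networks and dimensions differ.
TRAJ_DIM = 32            # flattened demonstration trajectory (the "prompt")
EMBED_DIM = 16           # behavior embedding
LATENT_DIM = 24          # latent policy-parameter representation
OBS_DIM, ACT_DIM = 8, 2  # decoded policy maps observations to actions

rng = np.random.default_rng(0)

def encode_behavior(traj, W_enc):
    """Behavior encoder: demonstration trajectory -> conditioning embedding."""
    return np.tanh(traj @ W_enc)

def denoise_step(z_t, t, cond, W_eps):
    """One reverse-diffusion step: predict noise from (latent, timestep, condition).

    A trained model would use a neural noise predictor and a proper DDPM/DDIM
    update; here a linear map and a fixed step size stand in for both.
    """
    inp = np.concatenate([z_t, [t], cond])
    eps_hat = inp @ W_eps
    return z_t - 0.1 * eps_hat

def decode_policy(z, W_dec):
    """Parameter decoder: latent -> weights of a linear policy (obs -> action)."""
    return (z @ W_dec).reshape(OBS_DIM, ACT_DIM)

# Random stand-in weights; in Make-An-Agent these components are trained
# on policy checkpoints and their corresponding trajectories.
W_enc = rng.normal(size=(TRAJ_DIM, EMBED_DIM)) / np.sqrt(TRAJ_DIM)
W_eps = rng.normal(size=(LATENT_DIM + 1 + EMBED_DIM, LATENT_DIM)) / 8
W_dec = rng.normal(size=(LATENT_DIM, OBS_DIM * ACT_DIM)) / np.sqrt(LATENT_DIM)

demo = rng.normal(size=TRAJ_DIM)      # a single behavior demonstration
cond = encode_behavior(demo, W_enc)   # condition the generator on it

z = rng.normal(size=LATENT_DIM)       # start from pure noise
for t in range(10, 0, -1):            # reverse diffusion, behavior-conditioned
    z = denoise_step(z, t / 10.0, cond, W_eps)

policy_W = decode_policy(z, W_dec)    # deployable policy parameters
action = rng.normal(size=OBS_DIM) @ policy_W
print(policy_W.shape, action.shape)
```

The point of the sketch is the data flow: the demonstration never supervises the policy directly; it only conditions the diffusion process that generates the policy's parameters.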
Problem

Research questions and friction points this paper is trying to address.

Generates control policies from single behavior demonstrations
Uses diffusion models for behavior-to-policy synthesis
Generalizes across tasks and robots with few-shot inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Behavior-prompted diffusion for policy generation
Latent parameter synthesis from behavior embeddings
Few-shot generalization across diverse tasks