Latent Embedding Adaptation for Human Preference Alignment in Diffusion Planners

📅 2025-03-24
🤖 AI Summary
This work addresses the challenge of personalizing trajectory generation in automated decision-making systems. We propose a lightweight preference alignment method based on a pretrained conditional diffusion model. Our core innovations are: (i) a learnable preference latent embedding (PLE) and (ii) a gradient-driven preference inversion optimization mechanism that requires neither reward signals nor online interaction. Operating within an offline, reward-free pretraining paradigm, our method achieves user preference adaptation in a single optimization step, reducing computational overhead by over 90%. Evaluated on real human preference benchmarks, it significantly improves the match rate of high-value trajectories and outperforms mainstream alignment approaches including RLHF and LoRA. The method establishes a new paradigm for low-cost, high-fidelity personalized trajectory generation.
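The key idea in the summary is that the pretrained model stays frozen and only a small latent embedding is optimized by gradient descent. As a hedged illustration (not the paper's actual implementation), the toy sketch below stands in a linear map for the frozen conditional diffusion model and runs "preference inversion" on the embedding alone; all names, dimensions, and the learning rate here are invented for illustration.

```python
import numpy as np

# Toy sketch of preference inversion: the pretrained model is frozen and
# only a small preference latent embedding (PLE) z is optimized so that
# the conditioned output matches a user-preferred trajectory. The linear
# map W is a stand-in for the real frozen conditional diffusion model.

rng = np.random.default_rng(0)
d_z, d_traj = 4, 8                        # embedding dim, trajectory dim

# Frozen "pretrained" weights (orthonormal columns keep the toy stable).
W, _ = np.linalg.qr(rng.normal(size=(d_traj, d_z)))

def generate(z):
    """Frozen conditional model: PLE -> trajectory (toy linear stand-in)."""
    return W @ z

# A high-value trajectory the user has labelled as preferred.
z_user = rng.normal(size=d_z)
preferred = generate(z_user)

# Preference inversion: gradient steps on z only; W is never updated
# (in contrast to RLHF or LoRA, which modify model weights).
z = np.zeros(d_z)
lr = 0.5
for _ in range(100):
    residual = generate(z) - preferred    # prediction error
    z -= lr * (W.T @ residual)            # grad of 0.5 * ||W z - preferred||^2

print(float(np.linalg.norm(generate(z) - preferred)))  # residual norm, near 0
```

With orthonormal columns in `W`, each gradient step halves the error, so the recovered embedding converges to the one that generates the preferred trajectory. The real method optimizes the PLE through a diffusion denoising objective rather than a linear least-squares loss, but the frozen-model, embedding-only structure is the same.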

๐Ÿ“ Abstract
This work addresses the challenge of personalizing trajectories generated in automated decision-making systems by introducing a resource-efficient approach that enables rapid adaptation to individual users' preferences. Our method leverages a pretrained conditional diffusion model with Preference Latent Embeddings (PLE), trained on a large, reward-free offline dataset. The PLE serves as a compact representation for capturing specific user preferences. By adapting the pretrained model using our proposed preference inversion method, which directly optimizes the learnable PLE, we achieve superior alignment with human preferences compared to existing solutions like Reinforcement Learning from Human Feedback (RLHF) and Low-Rank Adaptation (LoRA). To better reflect practical applications, we create a benchmark experiment using real human preferences on diverse, high-reward trajectories.
Problem

Research questions and friction points this paper is trying to address.

Personalizing automated decision-making trajectories for individual preferences
Adapting pretrained diffusion models with compact Preference Latent Embeddings
Aligning generated trajectories with human preferences efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Preference Latent Embeddings (PLE) for user preferences
Adapts pretrained model via preference inversion method
Outperforms RLHF and LoRA in human alignment
Wen Zheng Terence Ng
Nanyang Technological University, Continental Automotive Singapore
Jianda Chen
Nanyang Technological University
Yuan Xu
Nanyang Technological University
Tianwei Zhang
Nanyang Technological University