AI Summary
This work addresses the challenge of personalizing trajectory generation in automated decision-making systems. We propose a lightweight preference alignment method built on a pretrained conditional diffusion model. Our core innovations are (i) a learnable preference latent embedding (PLE) and (ii) a gradient-driven preference inversion mechanism that requires neither reward signals nor online interaction. Operating within an offline, reward-free pretraining paradigm, the method adapts to a user's preferences in a single optimization step, reducing computational overhead by over 90%. Evaluated on benchmarks built from real human preferences, it significantly improves the match rate of high-value trajectories and outperforms mainstream alignment approaches, including RLHF and LoRA. The method establishes a new paradigm for low-cost, high-fidelity personalized trajectory generation.
Abstract
This work addresses the challenge of personalizing trajectory generation in automated decision-making systems by introducing a resource-efficient approach that enables rapid adaptation to individual users' preferences. Our method leverages a pretrained conditional diffusion model with Preference Latent Embeddings (PLE), trained on a large, reward-free offline dataset. The PLE serves as a compact representation that captures a specific user's preferences. By adapting the pretrained model with our proposed preference inversion method, which directly optimizes the learnable PLE while leaving the model weights untouched, we achieve better alignment with human preferences than existing solutions such as Reinforcement Learning from Human Feedback (RLHF) and Low-Rank Adaptation (LoRA). To better reflect practical applications, we construct a benchmark using real human preferences over diverse, high-reward trajectories.
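The core idea of preference inversion, optimizing only a small learnable latent while the pretrained generator stays frozen, can be sketched with a toy stand-in. Everything here is an illustrative assumption, not the paper's implementation: a fixed linear map `generate` replaces the conditional diffusion model, and a least-squares fit to one preferred trajectory replaces the paper's preference objective (which, unlike this toy loop, converges in a single optimization step).

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in for the pretrained conditional generator. The real method
# uses a conditional diffusion model; a fixed linear map keeps the sketch tiny.
W = np.eye(6) + 0.1 * rng.standard_normal((6, 6))  # weights are never updated

def generate(z):
    """Map a preference latent embedding z to a (toy) trajectory."""
    return W @ z

# A user's preferred trajectory. In the paper this signal comes from human
# preference feedback, not from a reward model or online interaction.
y_pref = rng.standard_normal(6)

# Preference inversion: gradient descent on the latent z only.
z = np.zeros(6)
lr = 0.5 / np.linalg.norm(W, 2) ** 2      # stable step size for this quadratic
res_before = np.linalg.norm(generate(z) - y_pref)
for _ in range(300):
    err = generate(z) - y_pref            # reward-free fitting error
    z -= lr * (2.0 * W.T @ err)           # gradient of ||generate(z) - y_pref||^2
res_after = np.linalg.norm(generate(z) - y_pref)
print(res_before, res_after)
```

Because only the low-dimensional latent is updated, the pretrained model's knowledge is preserved and adaptation touches a few parameters rather than the full network, which is the source of the cost advantage over RLHF- or LoRA-style fine-tuning.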