🤖 AI Summary
Current text-to-image diffusion models cannot model individual users' aesthetic preferences, which limits precise personalized image editing. To address this, we propose the first collaborative preference optimization framework for personalized image editing, Collaborative Direct Preference Optimization (C-DPO): it constructs a dynamic user preference graph, employs a lightweight graph neural network to model inter-user similarity, and introduces an enhanced DPO objective that integrates neighborhood consistency, jointly optimizing individual alignment and group-level collaboration during diffusion-based editing. Rather than fine-tuning the full diffusion model, the method achieves efficient customization through preference guidance over lightweight user embeddings. Extensive user studies and quantitative evaluations (e.g., CLIP-Score, FID) across multiple benchmarks show that our approach significantly outperforms state-of-the-art baselines, improving preference consistency by 23.6% (p < 0.01).
📝 Abstract
Text-to-image (T2I) diffusion models have made remarkable strides in generating and editing high-fidelity images from text. Yet these models remain fundamentally generic, failing to adapt to the nuanced aesthetic preferences of individual users. In this work, we present the first framework for personalized image editing in diffusion models, introducing Collaborative Direct Preference Optimization (C-DPO), a novel method that aligns image edits with user-specific preferences while leveraging collaborative signals from like-minded individuals. Our approach encodes each user as a node in a dynamic preference graph and learns embeddings via a lightweight graph neural network, enabling information sharing across users with overlapping visual tastes. We enhance a diffusion model's editing capabilities by integrating these personalized embeddings into a novel DPO objective that jointly optimizes for individual alignment and neighborhood coherence. Comprehensive experiments, including user studies and quantitative benchmarks, demonstrate that our method consistently outperforms baselines in producing edits aligned with individual user preferences.
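To make the objective concrete, the following is a minimal, illustrative sketch of a DPO loss augmented with a neighborhood-consistency term, as the abstract describes. This is not the paper's implementation: the function name `c_dpo_loss`, the quadratic penalty on the distance to a similarity-weighted neighbor consensus, and the coefficients `beta` and `lam` are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def c_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
               user_emb, neighbor_embs, neighbor_sims,
               beta=0.1, lam=0.5):
    """Hypothetical C-DPO-style objective for one (preferred, rejected)
    edit pair: a standard DPO term plus a neighborhood-consistency
    penalty on the user's embedding."""
    # DPO term: reward margin between the preferred edit y_w and the
    # rejected edit y_l, measured relative to a frozen reference model.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    dpo_term = -np.log(sigmoid(margin))

    # Neighborhood term: squared distance from the user's embedding to
    # the similarity-weighted mean of neighboring users' embeddings
    # (the "collaborative signal" from the preference graph).
    w = neighbor_sims / neighbor_sims.sum()
    consensus = (w[:, None] * neighbor_embs).sum(axis=0)
    neigh_term = np.sum((user_emb - consensus) ** 2)

    return dpo_term + lam * neigh_term
```

The penalty vanishes when the user's embedding already matches the neighbor consensus, so the collaborative term only pulls users with sparse preference data toward like-minded neighbors; `lam` trades off individual alignment against group coherence.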