🤖 AI Summary
Existing personalized image generation methods, particularly those leveraging large multimodal models (LMMs), suffer from visual feature entanglement, which hinders simultaneous fidelity to user-specific stylistic preferences and intended semantics and often results in "guidance collapse." To address this, we propose DRC, a decoupled representation composition framework that enhances LMMs. Our approach introduces a dual-tower disentangler to explicitly separate style and semantic representations; it is trained with a reconstruction-driven paradigm, difficulty-aware importance sampling, and semantic-preserving data augmentation, and jointly fine-tunes the LMM. By disentangling representations at the latent level, our method mitigates guidance collapse at its source. Evaluated on two benchmarks, it delivers competitive performance with clear gains in both style fidelity and semantic consistency, generating images that faithfully reproduce both user-preferred styles and target semantics.
📝 Abstract
Personalized image generation has emerged as a promising direction in multimodal content creation. It aims to synthesize images tailored to individual style preferences (e.g., color schemes, character appearances, layout) and semantic intentions (e.g., emotion, action, scene contexts) by leveraging user-interacted history images and multimodal instructions. Despite notable progress, existing methods -- whether based on diffusion models, large language models, or Large Multimodal Models (LMMs) -- struggle to accurately capture and fuse user style preferences and semantic intentions. In particular, the state-of-the-art LMM-based method suffers from the entanglement of visual features, leading to Guidance Collapse, where the generated images fail to preserve user-preferred styles or reflect the specified semantics. To address these limitations, we introduce DRC, a novel personalized image generation framework that enhances LMMs through Disentangled Representation Composition. DRC explicitly extracts user style preferences and semantic intentions from history images and the reference image, respectively, to form user-specific latent instructions that guide image generation within LMMs. Specifically, it involves two critical learning stages: 1) Disentanglement learning, which employs a dual-tower disentangler to explicitly separate style and semantic features, optimized via a reconstruction-driven paradigm with difficulty-aware importance sampling; and 2) Personalized modeling, which applies semantic-preserving augmentations to effectively adapt the disentangled representations for robust personalized generation. Extensive experiments on two benchmarks demonstrate that DRC shows competitive performance while effectively mitigating the guidance collapse issue, underscoring the importance of disentangled representation learning for controllable and effective personalized image generation.
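To make the dual-tower idea concrete, below is a minimal sketch of how style and semantic representations might be extracted separately and composed into a latent instruction. This is an illustration only: the tower architecture, feature dimensions, pooling choice, and the mean-pooling over history images are assumptions for exposition, not DRC's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_tower(dim_in, dim_out, rng):
    """Build a tiny two-layer MLP encoder and return its forward function.
    Stands in for one 'tower' of the disentangler (hypothetical architecture)."""
    W1 = rng.standard_normal((dim_in, 64)) * 0.1
    W2 = rng.standard_normal((64, dim_out)) * 0.1
    def forward(x):
        h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
        return h @ W2
    return forward

FEAT = 32  # assumed dimension of pre-extracted visual features
LAT = 16   # assumed latent dimension per tower

# One tower per factor: style from history images, semantics from the reference.
style_tower = make_tower(FEAT, LAT, rng)
semantic_tower = make_tower(FEAT, LAT, rng)

history_feats = rng.standard_normal((5, FEAT))   # features of 5 user-history images
reference_feat = rng.standard_normal((1, FEAT))  # features of the reference image

# Pool a single style preference over the history; take semantics from the reference.
style_rep = style_tower(history_feats).mean(axis=0)
semantic_rep = semantic_tower(reference_feat)[0]

# Compose the disentangled parts into a user-specific latent instruction
# that would condition generation inside the LMM.
latent_instruction = np.concatenate([style_rep, semantic_rep])
print(latent_instruction.shape)  # (32,)
```

Keeping the two factors in separate encoders, rather than one shared visual embedding, is what lets the composed instruction carry style and semantics independently, which is the property the reconstruction-driven disentanglement stage is meant to enforce.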