AnyStory: Towards Unified Single and Multiple Subject Personalization in Text-to-Image Generation

📅 2025-01-16
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing text-to-image methods struggle to simultaneously achieve high identity fidelity and precise text alignment when generating images that contain multiple specific subjects. To address this, the paper proposes a unified "encode-then-route" framework: (1) a ReferenceNet image encoder, used in conjunction with a CLIP vision encoder, produces high-fidelity encodings of single- or multi-subject features; (2) a decoupled, instance-aware subject router perceives and predicts each subject's potential location in the latent space; and (3) the router guides fine-grained injection of the subject conditions into those latent regions. The approach unifies single- and multi-subject personalized generation without sacrificing subject fidelity, targeting the longstanding bottlenecks of multi-subject localization and condition injection. Experiments show consistent advantages over state-of-the-art methods in identity consistency, text-image alignment, and controllability for complex narratives, supported by both qualitative and quantitative results.

📝 Abstract
Recently, large-scale generative models have demonstrated outstanding text-to-image generation capabilities. However, generating high-fidelity personalized images with specific subjects still presents challenges, especially in cases involving multiple subjects. In this paper, we propose AnyStory, a unified approach for personalized subject generation. AnyStory not only achieves high-fidelity personalization for single subjects, but also for multiple subjects, without sacrificing subject fidelity. Specifically, AnyStory models the subject personalization problem in an "encode-then-route" manner. In the encoding step, AnyStory utilizes a universal and powerful image encoder, i.e., ReferenceNet, in conjunction with CLIP vision encoder to achieve high-fidelity encoding of subject features. In the routing step, AnyStory utilizes a decoupled instance-aware subject router to accurately perceive and predict the potential location of the corresponding subject in the latent space, and guide the injection of subject conditions. Detailed experimental results demonstrate the excellent performance of our method in retaining subject details, aligning text descriptions, and personalizing for multiple subjects. The project page is at https://aigcdesigngroup.github.io/AnyStory/.
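To make the "encode-then-route" formulation concrete, below is a minimal, self-contained PyTorch sketch of the idea as described in the abstract. All names and shapes here (SubjectEncoder, SubjectRouter, routed_injection, token counts, feature dimensions) are hypothetical stand-ins for illustration only: the actual method uses ReferenceNet and a CLIP vision encoder for the encoding step and a decoupled instance-aware router trained inside a diffusion model, none of which is reproduced here.

```python
# Hypothetical sketch of an "encode-then-route" pipeline; not the authors' code.
import torch
import torch.nn as nn


class SubjectEncoder(nn.Module):
    """Stand-in for ReferenceNet + CLIP: maps a reference image to subject tokens."""
    def __init__(self, dim=64, num_tokens=4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, dim, 8, stride=8), nn.GELU())
        self.num_tokens = num_tokens
        self.proj = nn.Linear(dim, dim * num_tokens)

    def forward(self, ref_img):                          # ref_img: (B, 3, H, W)
        feat = self.backbone(ref_img).mean(dim=(2, 3))   # global feature (B, dim)
        return self.proj(feat).view(ref_img.size(0), self.num_tokens, -1)


class SubjectRouter(nn.Module):
    """Predicts, per subject, a soft map over latent locations (where to inject)."""
    def __init__(self, dim=64):
        super().__init__()
        self.query = nn.Linear(dim, dim)   # from subject tokens
        self.key = nn.Linear(dim, dim)     # from latent features

    def forward(self, latent, subj_tokens):
        # latent: (B, N, dim) flattened latent; subj_tokens: (B, S, T, dim)
        q = self.query(subj_tokens.mean(dim=2))              # one query per subject
        k = self.key(latent)
        logits = torch.einsum("bsd,bnd->bsn", q, k) / k.size(-1) ** 0.5
        # softmax over subjects: each latent location is claimed by at most one subject
        return logits.softmax(dim=1)                          # routing maps (B, S, N)


def routed_injection(latent, subj_tokens, routes, to_v):
    """Inject each subject's features only into the latent regions routed to it."""
    v = to_v(subj_tokens.mean(dim=2))                         # per-subject values (B, S, dim)
    update = torch.einsum("bsn,bsd->bnd", routes, v)          # scatter by routing maps
    return latent + update


# Toy usage: two reference subjects conditioning a 16x16 latent grid.
B, S, dim = 1, 2, 64
encoder, router = SubjectEncoder(dim), SubjectRouter(dim)
to_v = nn.Linear(dim, dim)

ref_imgs = torch.randn(B * S, 3, 64, 64)                      # one image per subject
subj_tokens = encoder(ref_imgs).view(B, S, -1, dim)           # (B, S, T, dim)
latent = torch.randn(B, 16 * 16, dim)                         # flattened diffusion latent

routes = router(latent, subj_tokens)                          # where each subject goes
latent = routed_injection(latent, subj_tokens, routes, to_v)
print(latent.shape)  # torch.Size([1, 256, 64])
```

One way to read the decoupling described in the abstract, as reflected in this sketch, is that the router only decides where each subject's condition is applied, while a separate projection decides what is injected, so adding subjects does not blur per-subject identity.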
Problem

Research questions and friction points this paper is trying to address.

Text-to-image synthesis
Personalized character generation
Multiple character representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

AnyStory
CLIP Visual Encoder
Decoupled instance-aware subject router (subject localization)
Junjie He
Guizhou University
MRI, Deep Learning, CT
Yuxiang Tuo
Institute for Intelligent Computing, Alibaba Tongyi Lab
Binghui Chen
Beijing University of Posts and Telecommunications
Deep Learning, Machine Learning, Computer Vision
Chongyang Zhong
Institute for Intelligent Computing, Alibaba Tongyi Lab
Yifeng Geng
Institute for Intelligent Computing, Alibaba Tongyi Lab
Liefeng Bo
Head of Applied Computer Vision Lab at Alibaba Group
Machine Learning, Computer Vision, Robotics