VersaGen: Unleashing Versatile Visual Control for Text-to-Image Synthesis

📅 2024-12-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient fine-grained, multi-type visual control in text-to-image (T2I) generation, this paper proposes a unified controllable generation framework that requires no fine-tuning of base diffusion models (e.g., SDXL). It is the first to formulate visual control as four synergistic granularities: single-object, multi-object, background, and compositional control. The method adopts a frozen-backbone architecture augmented with lightweight visual adapters to jointly integrate multi-source conditioning signals—including sketch and segmentation maps. A three-stage inference optimization strategy is introduced: gradient reweighting, adaptive control strength scheduling, and cross-control consistency regularization. Extensive experiments on COCO and Sketchy demonstrate significant improvements over ControlNet and T2I-Adapter, enabling zero-shot compositional control. User studies show a 42% increase in perceived controllability, while layout accuracy and generation fidelity achieve state-of-the-art performance.

Technology Category

Application Category

📝 Abstract
Despite the rapid advancements in text-to-image (T2I) synthesis, enabling precise visual control remains a significant challenge. Existing works attempted to incorporate multi-facet controls (text and sketch), aiming to enhance the creative control over generated images. However, our pilot study reveals that the expressive power of humans far surpasses the capabilities of current methods. Users desire a more versatile approach that can accommodate their diverse creative intents, ranging from controlling individual subjects to manipulating the entire scene composition. We present VersaGen, a generative AI agent that enables versatile visual control in T2I synthesis. VersaGen admits four types of visual controls: i) single visual subject; ii) multiple visual subjects; iii) scene background; iv) any combination of the three above or merely no control at all. We train an adaptor upon a frozen T2I model to accommodate the visual information into the text-dominated diffusion process. We introduce three optimization strategies during the inference phase of VersaGen to improve generation results and enhance user experience. Comprehensive experiments on COCO and Sketchy validate the effectiveness and flexibility of VersaGen, as evidenced by both qualitative and quantitative results.
Problem

Research questions and friction points this paper is trying to address.

Text-to-Image Synthesis
Image Detail Control
Creativity and Flexibility Limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

VersaGen
AI-driven T2I customization
Enhanced visual experience
🔎 Similar Papers
No similar papers found.
Z
Zhipeng Chen
School of Artificial Intelligence, Beijing University of Posts and Telecommunications, China
Lan Yang
Lan Yang
Edwin & Florence Skinner Professor, Electrical & Systems Engineering, Washington Univ. in St Louis
resonatorlasernonlinear opticssensingnon-Hermitian physics
Yonggang Qi
Yonggang Qi
Associate Professor, Beijing University of Posts and Telecommunications
computer visionsketch-based vision learning algorithms and applications
H
Honggang Zhang
School of Artificial Intelligence, Beijing University of Posts and Telecommunications, China
Kaiyue Pang
Kaiyue Pang
SketchX, CVSSP, University of Surrey
Computer VisionMachine LearningArtificial Intelligence
K
Ke Li
School of Artificial Intelligence, Beijing University of Posts and Telecommunications, China; SketchX, CVSSP, University of Surrey, United Kingdom
Yi-Zhe Song
Yi-Zhe Song
SketchX Lab, CVSSP, University of Surrey
Computer VisionComputer GraphicsMachine LearningArtificial Intelligence