🤖 AI Summary
To address semantic entanglement and fine-detail degradation in class-conditional diffusion models for fine-grained image generation, this paper proposes a hierarchical controllable generation framework. Methodologically: (1) a hierarchical embedder jointly models superclass and subclass semantics to alleviate entanglement; (2) ProAttention, a computationally efficient attention mechanism, is introduced to accelerate diffusion Transformers; (3) an end-to-end trainable joint enhancement-degradation module, integrated with super-resolution strategies in the perceptual generation stage, improves detail fidelity. Evaluated on the CUB and Oxford-Flowers benchmarks, the method significantly outperforms existing fine-tuning approaches, achieving state-of-the-art results in FID, CLIP-Score, and human evaluation while jointly improving fine-grained semantic consistency and visual detail fidelity.
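The hierarchical embedder described above can be illustrated with a minimal sketch: a superclass embedding supplies coarse semantics, a subclass embedding adds fine-grained identity, and a fusion layer produces the conditioning vector for the diffusion model. The paper does not specify the fusion operation; the concatenation-plus-linear layer below is an assumption for illustration only.

```python
import torch
import torch.nn as nn

class HierarchicalEmbedder(nn.Module):
    """Sketch of a two-level class embedder (fusion layer is a hypothetical
    choice; the paper only states that superclass and subclass semantics
    are jointly modeled)."""
    def __init__(self, num_super: int, num_sub: int, dim: int):
        super().__init__()
        self.super_emb = nn.Embedding(num_super, dim)  # coarse category, e.g. "bird family"
        self.sub_emb = nn.Embedding(num_sub, dim)      # fine-grained class, e.g. species
        self.fuse = nn.Linear(2 * dim, dim)            # assumed fusion: concat -> linear

    def forward(self, super_ids: torch.Tensor, sub_ids: torch.Tensor) -> torch.Tensor:
        # Concatenate both levels, then project back to the conditioning width.
        joint = torch.cat([self.super_emb(super_ids), self.sub_emb(sub_ids)], dim=-1)
        return self.fuse(joint)

# Usage: a batch of two (superclass, subclass) label pairs yields two
# conditioning vectors of width `dim`.
embedder = HierarchicalEmbedder(num_super=14, num_sub=200, dim=64)
cond = embedder(torch.tensor([1, 3]), torch.tensor([17, 152]))
```

The key property is that two subclasses sharing a superclass also share part of their conditioning signal, which is the intuition behind using hierarchy to disentangle semantics.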
📝 Abstract
Diffusion models are highly regarded for their controllability and the diversity of the images they generate. However, class-conditional generation methods based on diffusion models tend to focus on common categories, and in large-scale fine-grained image generation, semantic entanglement and insufficient detail in the generated images persist. This paper introduces a tiered embedder for fine-grained image generation that integrates semantic information from both superclasses and subclasses, allowing the diffusion model to better incorporate semantic information and mitigating semantic entanglement. To address the lack of detail in fine-grained images, we introduce super-resolution into the perceptual information generation stage, enhancing the detailed features of fine-grained images through paired enhancement and degradation models. Furthermore, we propose an efficient ProAttention mechanism that can be implemented effectively in the diffusion model. Extensive experiments on public benchmarks demonstrate that our approach outperforms other state-of-the-art fine-tuning methods.
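The enhancement-and-degradation pairing mentioned in the abstract follows a common super-resolution training pattern: a degradation model synthesizes low-quality inputs from high-resolution images, and an enhancement network is trained to invert that degradation. The paper does not give the degradation operators; the bicubic-downsample-plus-noise model below is an assumption, chosen as a typical minimal instance.

```python
import torch
import torch.nn.functional as F

def degrade(hr: torch.Tensor, scale: int = 4, noise_std: float = 0.05) -> torch.Tensor:
    """Hypothetical degradation model: bicubic downsampling followed by
    additive Gaussian noise, producing low-quality counterparts of
    high-resolution images for training an enhancement network."""
    lr = F.interpolate(hr, scale_factor=1 / scale,
                       mode="bicubic", align_corners=False)
    return lr + noise_std * torch.randn_like(lr)

# Training-loop sketch (`enhance_net` is an assumed restoration network):
#   lr = degrade(hr_batch)
#   loss = F.l1_loss(enhance_net(lr), downsample_target_or_hr)

hr_batch = torch.rand(2, 3, 64, 64)   # toy batch of high-resolution images
lr_batch = degrade(hr_batch)          # degraded inputs at 1/4 resolution
```

Because the degradation is applied on the fly, the enhancement network sees a fresh noise realization each step, which is the usual way such modules are trained end to end.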