HiGFA: Hierarchical Guidance for Fine-grained Data Augmentation with Diffusion Models

📅 2025-11-16
🤖 AI Summary
In fine-grained image classification, conventional data augmentation often produces misleading samples that degrade classifier performance. To address this, we propose a hierarchical guided diffusion generation framework. Our method jointly incorporates three complementary guidance signals—textual semantics, edge contours, and feedback from a fine-grained classifier—during the denoising sampling process. We introduce two key innovations: (i) a confidence-weighted multimodal fusion mechanism that dynamically balances guidance contributions, and (ii) a timestep-aware modulation strategy that adaptively emphasizes structural coherence at early stages and discriminative local details (e.g., texture, morphology) at later stages. Extensive experiments on standard benchmarks—including CUB-200 and FGVC-Aircraft—demonstrate that our generated samples consistently improve downstream classifier accuracy by an average of +2.3%, while preserving high fidelity, diversity, and generalization capability.
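The timestep-aware modulation described above can be sketched as a simple schedule: structural signals (text, contour) dominate early in sampling, and the fine-grained classifier takes over late. This is a minimal illustrative sketch, not the paper's implementation; the function name, the sigmoid gate, and all base strengths (7.5, 2.0, 1.0) and schedule constants are hypothetical choices.

```python
import math

def guidance_weights(t_norm: float, switch: float = 0.7, sharpness: float = 10.0):
    """Hypothetical timestep-aware guidance schedule.

    t_norm: sampling progress in [0, 1], where 0 is the start of
    denoising (pure noise) and 1 is the final step. A sigmoid gate
    hands control from the structural signals (text, contour) to the
    fine-grained classifier around `switch`.
    """
    # Gate ramps from ~0 (early steps) to ~1 (late steps).
    gate = 1.0 / (1.0 + math.exp(-sharpness * (t_norm - switch)))
    w_text = 7.5 * (1.0 - 0.5 * gate)   # strong early, tapered late
    w_contour = 2.0 * (1.0 - gate)      # structural guidance fades out
    w_classifier = 1.0 * gate           # classifier guidance fades in
    return w_text, w_contour, w_classifier
```

At `t_norm = 0` the classifier weight is effectively zero, so early steps are shaped entirely by text and contour; near `t_norm = 1` the classifier weight approaches its base strength while contour guidance has decayed away.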

📝 Abstract
Generative diffusion models show promise for data augmentation. However, applying them to fine-grained tasks presents a significant challenge: ensuring synthetic images accurately capture the subtle, category-defining features critical for high fidelity. Standard approaches, such as text-based Classifier-Free Guidance (CFG), often lack the required specificity, potentially generating misleading examples that degrade fine-grained classifier performance. To address this, we propose Hierarchically Guided Fine-grained Augmentation (HiGFA). HiGFA leverages the temporal dynamics of the diffusion sampling process. It employs strong text and transformed contour guidance with fixed strengths in the early-to-mid sampling stages to establish overall scene, style, and structure. In the final sampling stages, HiGFA activates a specialized fine-grained classifier guidance and dynamically modulates the strength of all guidance signals based on prediction confidence. This hierarchical, confidence-driven orchestration enables HiGFA to generate diverse yet faithful synthetic images by intelligently balancing global structure formation with precise detail refinement. Experiments on several FGVC datasets demonstrate the effectiveness of HiGFA.
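The abstract combines three guidance signals into each denoising step. One common way such signals are merged is a CFG-style update: conditional-minus-unconditional differences for the text and contour branches, plus a classifier-gradient term for the fine-grained classifier. The sketch below is a hedged illustration of that combination under those assumptions; the function name and sign convention are hypothetical and the arrays stand in for real noise predictions from a diffusion model.

```python
import numpy as np

def guided_noise_estimate(eps_uncond, eps_text, eps_contour, cls_grad,
                          w_text, w_contour, w_cls):
    """Combine three guidance signals in one CFG-style update (sketch).

    All array inputs share the latent's shape:
      eps_uncond  - unconditional noise prediction
      eps_text    - text-conditioned noise prediction
      eps_contour - contour-conditioned noise prediction
      cls_grad    - gradient of the fine-grained classifier's
                    log-probability w.r.t. the current latent
    The weights would come from a stage-dependent schedule.
    """
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_contour * (eps_contour - eps_uncond)
            - w_cls * cls_grad)  # minus: nudge the latent toward higher class probability
```

With `w_text = 1` and the other weights zero this reduces to the text-conditioned prediction, matching the classifier-free-guidance limit; raising `w_contour` in early steps and `w_cls` in late steps reproduces the hierarchical staging the abstract describes.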
Problem

Research questions and friction points this paper is trying to address.

Enhancing fine-grained image fidelity with diffusion models
Addressing specificity limitations in text-based classifier guidance
Balancing global structure and fine details in synthetic images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical guidance orchestrates diffusion sampling stages
Early stages use strong text and contour guidance
Final stages apply fine-grained classifier with dynamic modulation
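The dynamic modulation in the final bullet can be pictured as scaling a guidance strength by the classifier's prediction confidence: when the classifier is unsure, guidance pushes harder; when it is already confident, guidance relaxes so the sample keeps its diversity. This is a minimal sketch of that idea; the function, the linear form, and the `floor` parameter are all illustrative assumptions, not the paper's formula.

```python
def modulated_strength(base_strength: float, confidence: float,
                       floor: float = 0.25) -> float:
    """Scale a guidance strength by classifier confidence (sketch).

    confidence in [0, 1]: low confidence keeps guidance near its base
    strength; high confidence relaxes it toward `floor * base_strength`
    to avoid over-correcting already-faithful samples.
    """
    confidence = min(max(confidence, 0.0), 1.0)  # clamp to [0, 1]
    return base_strength * (floor + (1.0 - floor) * (1.0 - confidence))
```

For example, with a base strength of 4.0, a confident prediction (confidence 1.0) drops the effective strength to 1.0, while a completely uncertain one keeps the full 4.0.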
Zhiguang Lu
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences
Qianqian Xu
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences
Peisong Wen
University of Chinese Academy of Sciences
machine learning · computer vision
Siran Da
Institute of Information Engineering, Chinese Academy of Sciences
Qingming Huang
University of Chinese Academy of Sciences
Multimedia Analysis and Retrieval · Image and Video Processing · Pattern Recognition · Computer Vision · Video Coding