🤖 AI Summary
In fine-grained image classification, conventional data augmentation often produces misleading samples that degrade classifier performance. To address this, we propose a hierarchical guided diffusion generation framework. Our method jointly incorporates three complementary guidance signals—textual semantics, edge contours, and feedback from a fine-grained classifier—during the denoising sampling process. We introduce two key innovations: (i) a confidence-weighted multimodal fusion mechanism that dynamically balances guidance contributions, and (ii) a timestep-aware modulation strategy that adaptively emphasizes structural coherence at early stages and discriminative local details (e.g., texture, morphology) at later stages. Extensive experiments on standard benchmarks—including CUB-200 and FGVC-Aircraft—demonstrate that our generated samples consistently improve downstream classifier accuracy by +2.3% on average, while preserving high fidelity, diversity, and generalization.
📝 Abstract
Generative diffusion models show promise for data augmentation. However, applying them to fine-grained tasks presents a significant challenge: ensuring that synthetic images accurately capture the subtle, category-defining features that distinguish closely related classes. Standard approaches, such as text-based Classifier-Free Guidance (CFG), often lack the required specificity, potentially generating misleading examples that degrade fine-grained classifier performance. To address this, we propose Hierarchically Guided Fine-grained Augmentation (HiGFA). HiGFA leverages the temporal dynamics of the diffusion sampling process. It employs strong text and transformed contour guidance with fixed strengths in the early-to-mid sampling stages to establish the overall scene, style, and structure. In the final sampling stages, HiGFA activates a specialized fine-grained classifier guidance and dynamically modulates the strength of all guidance signals based on prediction confidence. This hierarchical, confidence-driven orchestration enables HiGFA to generate diverse yet faithful synthetic images by intelligently balancing global structure formation with precise detail refinement. Experiments on several FGVC datasets demonstrate the effectiveness of HiGFA.
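To make the stage-wise scheduling concrete, the following is a minimal sketch of how the hierarchical, confidence-driven guidance strengths described above could be scheduled across sampling steps. All names, default strengths, the stage boundary (`late_frac`), and the direction of the confidence modulation are illustrative assumptions for exposition, not the paper's actual implementation:

```python
# Hypothetical sketch of HiGFA-style hierarchical guidance scheduling.
# Default strengths, the late-stage threshold, and the modulation rule
# are assumptions made for illustration only.

def guidance_weights(step, total_steps, confidence,
                     w_text=7.5, w_contour=2.0, w_cls=1.0,
                     late_frac=0.8):
    """Return (text, contour, classifier) guidance strengths at one step.

    Early-to-mid steps: fixed text and contour guidance establish the
    overall scene, style, and structure; classifier guidance is off.
    Final steps (after the `late_frac` fraction of sampling): the
    fine-grained classifier guidance activates, and all strengths are
    modulated by the classifier's prediction confidence in [0, 1].
    """
    progress = step / total_steps
    if progress < late_frac:
        # Fixed-strength text + contour guidance, no classifier signal.
        return w_text, w_contour, 0.0
    # Assumed modulation rule: low confidence -> stronger corrective
    # classifier guidance; high confidence -> relax it and the other
    # signals to preserve sample diversity.
    return (w_text * confidence,
            w_contour * confidence,
            w_cls * (1.0 - confidence))
```

At each denoising step, the three returned weights would scale the corresponding guidance terms (e.g., CFG-style score offsets for text and contour, and a classifier-gradient term) before they are combined; the key point the sketch illustrates is that classifier guidance is inactive until the final stages, where confidence takes over the balancing.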