🤖 AI Summary
In real-world hierarchical image classification, inconsistent annotation granularities (e.g., mixing “bird” and “bald eagle”) pose a fundamental challenge, yet existing methods assume uniform fine-grained supervision. To address this, we propose a free-grain learning framework, the first to systematically model heterogeneous granularity annotations across instances. We introduce ImageNet-F, a large-scale benchmark that emulates human multi-level cognitive annotation behavior, using CLIP as a proxy for semantic ambiguity to simulate realistic mixed-granularity labels. To learn under this supervision, we generate pseudo-attributes via vision-language models to strengthen semantic guidance and integrate semi-supervised learning to jointly exploit labels of all granularities. Experiments demonstrate significant improvements in classification accuracy and robustness under mixed-granularity supervision. Our approach establishes a new paradigm for hierarchical classification under weak, real-world supervision, moving beyond rigid granularity assumptions toward flexible, semantics-aware label utilization.
📝 Abstract
Hierarchical image classification predicts labels across a semantic taxonomy, but existing methods typically assume complete, fine-grained annotations, an assumption rarely met in practice. Real-world supervision varies in granularity, influenced by image quality, annotator expertise, and task demands: a distant bird may be labeled Bird, while a close-up reveals Bald eagle. We introduce ImageNet-F, a large-scale benchmark curated from ImageNet and structured into cognitively inspired basic, subordinate, and fine-grained levels. Using CLIP as a proxy for semantic ambiguity, we simulate realistic, mixed-granularity labels that reflect human annotation behavior. We formulate this setting as free-grain learning, in which supervision granularity varies across instances, and develop methods that enhance semantic guidance via pseudo-attributes from vision-language models and visual guidance via semi-supervised learning. These methods, along with strong baselines, substantially improve performance under mixed supervision. Together, our benchmark and methods advance hierarchical classification under real-world constraints.
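As a minimal sketch of the label-simulation idea described above (the taxonomy, class names, and threshold here are illustrative assumptions, not the paper's actual configuration), the confidence of a zero-shot classifier such as CLIP over fine-grained classes can decide whether an instance keeps its fine-grained label or backs off to a basic-level one:

```python
import numpy as np

# Hypothetical two-level taxonomy: fine-grained classes under basic-level parents.
TAXONOMY = {
    "bird": ["bald_eagle", "sparrow", "flamingo"],
    "dog": ["husky", "beagle"],
}

def assign_label(fine_probs, taxonomy, fine_threshold=0.6):
    """Simulate a mixed-granularity annotation from fine-grained class
    probabilities (e.g., CLIP zero-shot softmax scores).

    If the top fine-grained class is confident enough, keep the fine label;
    otherwise back off to its basic-level parent, mimicking an annotator
    who can only identify the coarse category (a distant, ambiguous bird)."""
    fine_classes = [c for children in taxonomy.values() for c in children]
    best = int(np.argmax(fine_probs))
    fine_label = fine_classes[best]
    if fine_probs[best] >= fine_threshold:
        return fine_label                      # confident: fine-grained label
    for parent, children in taxonomy.items():  # ambiguous: coarse label
        if fine_label in children:
            return parent
    return fine_label

# A sharp distribution keeps the fine label; a flat one backs off to "bird".
sharp = np.array([0.85, 0.05, 0.04, 0.03, 0.03])
flat = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
print(assign_label(sharp, TAXONOMY))  # bald_eagle
print(assign_label(flat, TAXONOMY))   # bird
```

In practice the probability vectors would come from CLIP image-text similarities over the fine-grained class prompts, and the threshold could be tuned to match the label-granularity mix observed in human annotation.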