Visually Consistent Hierarchical Image Classification

📅 2024-06-17
📈 Citations: 1
Influential: 1
🤖 AI Summary
To address cross-granularity prediction errors in hierarchical image classification caused by visual inconsistency at test time, this paper proposes the first hierarchical classification paradigm grounded in *intra-image visual consistency*. The method requires no external semantic supervision or pixel-level annotations; instead, it employs a self-supervised segmentation alignment mechanism that visually aligns fine-grained predictions with coarse-grained regions within the same image. By integrating multi-scale feature modeling with CLIP's zero-shot transfer capability, the framework enforces both semantic and visual consistency. Evaluated on multiple hierarchical classification benchmarks, the approach significantly outperforms zero-shot CLIP and existing state-of-the-art methods, achieving higher classification accuracy and improved prediction coherence. Notably, it also enhances unsupervised image segmentation quality, strengthening model interpretability and robustness without additional supervision.
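The summary above describes aligning fine-grained predictions with coarse-grained regions of the same image. As an illustrative sketch only (not the paper's implementation; the taxonomy, function name, and loss choice below are hypothetical), the core consistency idea can be expressed as marginalizing fine-grained class probabilities up to their coarse parents within each image region and penalizing disagreement with the coarse prediction:

```python
import numpy as np

# Hypothetical taxonomy: fine class index -> coarse parent index
# (e.g. fine classes 0-1 are kinds of 'Bird', 2-4 are kinds of 'Plant').
FINE_TO_COARSE = np.array([0, 0, 1, 1, 1])

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(fine_logits, coarse_logits):
    """Intra-image consistency sketch: aggregate fine-class probabilities
    into their coarse parents and take the cross-entropy against the
    coarse prediction for the same regions.

    fine_logits:   (N, F) per-region fine-level logits
    coarse_logits: (N, C) per-region coarse-level logits
    """
    p_fine = softmax(fine_logits)      # (N, F)
    p_coarse = softmax(coarse_logits)  # (N, C)
    # Marginalize fine probabilities up the taxonomy.
    p_fine_up = np.zeros_like(p_coarse)
    for c in range(p_coarse.shape[-1]):
        p_fine_up[:, c] = p_fine[:, FINE_TO_COARSE == c].sum(axis=-1)
    # Small loss when fine and coarse predictions agree, large otherwise.
    return float(-(p_coarse * np.log(p_fine_up + 1e-8)).sum(axis=-1).mean())
```

When the fine-level prediction's parent matches the coarse-level prediction for a region, the loss is near zero; a mismatch (e.g. fine says a 'Bird' species while coarse says 'Plant') is penalized.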

📝 Abstract
Hierarchical classification predicts labels across multiple levels of a taxonomy, e.g., from coarse-level 'Bird' to mid-level 'Hummingbird' to fine-level 'Green hermit', allowing flexible recognition under varying visual conditions. It is commonly framed as multiple single-level tasks, but each level may rely on different visual cues: Distinguishing 'Bird' from 'Plant' relies on global features like feathers or leaves, while separating 'Anna's hummingbird' from 'Green hermit' requires local details such as head coloration. Prior methods improve accuracy using external semantic supervision, but such statistical learning criteria fail to ensure consistent visual grounding at test time, resulting in incorrect hierarchical classification. We propose, for the first time, to enforce internal visual consistency by aligning fine-to-coarse predictions through intra-image segmentation. Our method outperforms zero-shot CLIP and state-of-the-art baselines on hierarchical classification benchmarks, achieving both higher accuracy and more consistent predictions. It also improves internal image segmentation without requiring pixel-level annotations.
Problem

Research questions and friction points this paper is trying to address.

Different taxonomy levels rely on different visual cues (global features for coarse labels, local details for fine ones)
External semantic supervision fails to ensure consistent visual grounding at test time
Resulting cross-granularity inconsistency leads to incorrect hierarchical predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

First paradigm to enforce intra-image visual consistency in hierarchical classification
Self-supervised segmentation alignment of fine-to-coarse predictions, with no external supervision or pixel-level annotations
Improves both hierarchical classification accuracy and unsupervised segmentation quality
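For context on the zero-shot CLIP baseline the paper compares against, here is a minimal sketch of per-level zero-shot classification, assuming image and label-text embeddings have already been produced by a CLIP-like encoder (the function name and embeddings below are hypothetical):

```python
import numpy as np

def zero_shot_level(image_emb, text_embs):
    """Return the index of the label whose text embedding is most
    cosine-similar to the image embedding; run once per taxonomy level
    (coarse labels, then mid-level, then fine)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return int(np.argmax(txt @ img))
```

Running this independently at each level is exactly what can produce inconsistent hierarchies (e.g. coarse 'Plant' but fine 'Green hermit'), which is the failure mode the paper's consistency mechanism targets.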