🤖 AI Summary
To address the performance limitations of CLIP in unsupervised multi-label image classification, namely its view-dependent predictions and inherent category bias, this paper proposes a CAM-guided multi-view CLIP knowledge distillation framework. Methodologically, Class Activation Maps (CAMs) from the classifier being trained automatically localize target-relevant regions, yielding diverse local cropped views that are then pseudo-labeled by CLIP's zero-shot predictions. A bias-correction mechanism debiases these pseudo-labels, enabling robust distillation without ground-truth annotations. The core innovation lies in integrating classifier CAMs into the CLIP distillation pipeline to jointly preserve local discriminability and global semantic consistency, while systematically mitigating CLIP's intrinsic category bias. Extensive experiments demonstrate substantial improvements over state-of-the-art unsupervised multi-label methods across multiple benchmark datasets. The source code is publicly available.
📝 Abstract
Multi-label classification is crucial for comprehensive image understanding, yet acquiring accurate annotations is challenging and costly. To address this, a recent study suggests exploiting unsupervised multi-label classification leveraging CLIP, a powerful vision-language model. Despite CLIP's proficiency, it suffers from view-dependent predictions and inherent bias, limiting its effectiveness. We propose a novel method that addresses these issues by leveraging multiple views near target objects, guided by Class Activation Mapping (CAM) of the classifier, and debiasing pseudo-labels derived from CLIP predictions. Our Classifier-guided CLIP Distillation (CCD) enables selecting multiple local views without extra labels and debiasing predictions to enhance classification performance. Experimental results validate our method's superiority over existing techniques across diverse datasets. The code is available at https://github.com/k0u-id/CCD.
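To make the two key ideas concrete, here is a minimal, self-contained sketch of (a) selecting local crop boxes around the strongest CAM activations and (b) debiasing CLIP pseudo-labels. This is an illustration only: the function names, the top-k cell selection, and the simple per-class mean-centring heuristic are assumptions for exposition, not the exact mechanisms used in CCD.

```python
import numpy as np

def cam_guided_boxes(cam, image_size, crop_size, k=3):
    """Return k (x0, y0, x1, y1) pixel boxes, each centred on one of the
    k strongest cells of a (H, W) CAM grid and clipped to the image."""
    H, W = cam.shape
    top = np.argsort(cam, axis=None)[::-1][:k]      # flat indices of top-k activations
    rows, cols = np.unravel_index(top, cam.shape)
    boxes = []
    for r, c in zip(rows, cols):
        # map the grid-cell centre to pixel coordinates
        cy = (r + 0.5) * image_size / H
        cx = (c + 0.5) * image_size / W
        x0 = int(np.clip(cx - crop_size / 2, 0, image_size - crop_size))
        y0 = int(np.clip(cy - crop_size / 2, 0, image_size - crop_size))
        boxes.append((x0, y0, x0 + crop_size, y0 + crop_size))
    return boxes

def debias_pseudo_labels(scores, threshold=0.5):
    """scores: (N images, C classes) CLIP similarity scores in [0, 1].
    Recentre each class around its dataset-wide mean (a toy stand-in for
    the paper's bias correction) before thresholding to binary labels."""
    centred = scores - scores.mean(axis=0, keepdims=True)
    return (centred + 0.5 > threshold).astype(int)

# Usage: one dominant CAM peak yields a crop box around that object region.
cam = np.zeros((7, 7))
cam[2, 3] = 1.0
print(cam_guided_boxes(cam, image_size=224, crop_size=64, k=1))  # [(80, 48, 144, 112)]
```

The mean-centring step illustrates why debiasing matters: a class that CLIP systematically over-scores across all images would otherwise flood the pseudo-labels with false positives.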