Classifier-guided CLIP Distillation for Unsupervised Multi-label Classification

📅 2025-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance limitations of CLIP in unsupervised multi-label image classification—stemming from its view dependency and inherent category bias—this paper proposes the first CAM-guided multi-view CLIP knowledge distillation framework. Methodologically, it leverages CLIP’s zero-shot capability to automatically localize target-relevant local regions via Class Activation Maps (CAMs), generating diverse multi-view cropped patches. A pseudo-labeling strategy combined with bias-correction mechanisms enables robust feature distillation without ground-truth annotations. The core innovation lies in integrating CAMs into the CLIP distillation pipeline to jointly preserve local discriminability and global semantic consistency, while systematically mitigating CLIP’s intrinsic category biases. Extensive experiments demonstrate substantial improvements over state-of-the-art unsupervised multi-label methods across multiple benchmark datasets. The source code is publicly available.
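The CAM-guided view selection described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a CAM heatmap is already available for a class and simply crops the image around the high-activation region (with a small padding margin) to produce a local view; the function name and defaults are hypothetical.

```python
import numpy as np

def cam_to_crop(cam, image, threshold=0.5, pad=0.1):
    """Derive a local-view crop from a class activation map (CAM).

    cam:   2D array of activation scores, same spatial size as `image`.
    image: H x W x C array.
    Returns a patch cropped around the high-activation region
    (or the full image if nothing exceeds the threshold).
    """
    # Normalize the CAM to [0, 1] and keep pixels above the threshold.
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    ys, xs = np.nonzero(cam >= threshold)
    if len(ys) == 0:
        return image  # fall back to the global view
    # Bounding box of the activated region, padded by a small margin.
    h, w = cam.shape
    dy, dx = int(pad * h), int(pad * w)
    y0, y1 = max(0, ys.min() - dy), min(h, ys.max() + 1 + dy)
    x0, x1 = max(0, xs.min() - dx), min(w, xs.max() + 1 + dx)
    return image[y0:y1, x0:x1]
```

In the full method, one such crop would be produced per predicted class CAM, yielding the diverse multi-view patches that are then fed to CLIP for distillation.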

📝 Abstract
Multi-label classification is crucial for comprehensive image understanding, yet acquiring accurate annotations is challenging and costly. To address this, a recent study suggests exploiting unsupervised multi-label classification leveraging CLIP, a powerful vision-language model. Despite CLIP's proficiency, it suffers from view-dependent predictions and inherent bias, limiting its effectiveness. We propose a novel method that addresses these issues by leveraging multiple views near target objects, guided by Class Activation Mapping (CAM) of the classifier, and debiasing pseudo-labels derived from CLIP predictions. Our Classifier-guided CLIP Distillation (CCD) enables selecting multiple local views without extra labels and debiasing predictions to enhance classification performance. Experimental results validate our method's superiority over existing techniques across diverse datasets. The code is available at https://github.com/k0u-id/CCD.
Problem

Research questions and friction points this paper addresses.

Addresses view-dependent predictions in CLIP for classification
Reduces inherent bias in CLIP pseudo-labels
Enhances unsupervised multi-label image classification accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Classifier-guided CLIP Distillation for multi-label classification
Uses CAM to select multiple local views
Debiases pseudo-labels from CLIP predictions
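One simple way to realize the debiasing idea in the last bullet is to center each class's CLIP similarity scores around its dataset-wide mean before thresholding, so habitually over-predicted classes are pushed down. This is an illustrative sketch under that assumption, not CCD's actual bias-correction mechanism; the function name and threshold are hypothetical.

```python
import numpy as np

def debias_pseudo_labels(scores, threshold=0.5):
    """Turn CLIP similarity scores into debiased multi-label pseudo-labels.

    scores: N x K array of image-text similarities (N images, K classes).
    Subtracting each class's mean score removes a per-class prior
    (CLIP's category bias); per-class min-max rescaling then lets a
    single threshold apply uniformly across classes.
    """
    class_bias = scores.mean(axis=0, keepdims=True)   # per-class prior
    centered = scores - class_bias                    # remove category bias
    lo = centered.min(axis=0, keepdims=True)
    hi = centered.max(axis=0, keepdims=True)
    normalized = (centered - lo) / (hi - lo + 1e-8)
    return (normalized >= threshold).astype(np.int64)
```

The resulting binary matrix would serve as the pseudo-label targets for distilling CLIP into the classifier without ground-truth annotations.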