Self-Enhanced Image Clustering with Cross-Modal Semantic Consistency

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image clustering methods typically freeze pre-trained encoders, leading to a semantic misalignment between learned representations and clustering objectives. To address this, we propose a self-enhanced framework based on cross-modal semantic consistency, built upon vision-language pre-trained models (e.g., CLIP). Our method first trains a lightweight clustering head with dynamically balanced regularization to align with the pre-trained model's semantics, then fine-tunes the encoder end-to-end under pseudo-label guidance. We further introduce a semantic alignment mechanism that jointly optimizes dynamic cluster center generation and sample assignment distribution. By breaking the frozen-encoder bottleneck, our approach achieves state-of-the-art performance on six benchmark datasets. Remarkably, using only ViT-B/32, it matches or surpasses prior methods relying on the larger ViT-L/14 backbone, demonstrating both the efficacy and generalizability of cross-modal semantic guidance for unsupervised image clustering.

📝 Abstract
While large language-image pre-trained models like CLIP offer powerful generic features for image clustering, existing methods typically freeze the encoder. This creates a fundamental mismatch between the model's task-agnostic representations and the demands of a specific clustering task, imposing a ceiling on performance. To break this ceiling, we propose a self-enhanced framework based on cross-modal semantic consistency for efficient image clustering. Our framework first builds a strong foundation via Cross-Modal Semantic Consistency and then specializes the encoder through Self-Enhancement. In the first stage, we focus on Cross-Modal Semantic Consistency. By mining consistency between generated image-text pairs at the instance, cluster-assignment, and cluster-center levels, we train lightweight clustering heads to align with the rich semantics of the pre-trained model. This alignment is bolstered by a novel method for generating higher-quality cluster centers and a dynamic balancing regularizer that ensures well-distributed assignments. In the second stage, we introduce a Self-Enhanced fine-tuning strategy. The well-aligned model from the first stage acts as a reliable pseudo-label generator. These self-generated supervisory signals then drive efficient, joint optimization of the vision encoder and clustering heads, unlocking their full potential. Extensive experiments on six mainstream datasets show that our method outperforms existing deep clustering methods by significant margins. Notably, our ViT-B/32 model already matches or even surpasses the accuracy of state-of-the-art methods built upon the far larger ViT-L/14.
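The dynamic balancing regularizer mentioned in the abstract can be understood as entropy maximization over the batch-averaged cluster assignment distribution. The sketch below is an illustrative NumPy formulation under that assumption; the function names and the exact loss form are not taken from the paper.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over cluster logits.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def balance_regularizer(logits):
    """Entropy of the batch-averaged soft assignment distribution.

    Maximizing this entropy (or minimizing its negation as a loss term)
    pushes the clustering head toward well-distributed assignments and
    discourages collapsing every sample into a single cluster.
    """
    p = softmax(logits)        # (batch, K) soft cluster assignments
    p_mean = p.mean(axis=0)    # marginal cluster-usage distribution
    return -(p_mean * np.log(p_mean + 1e-12)).sum()
```

A batch collapsed onto one cluster yields near-zero entropy, while perfectly balanced assignments approach log K, the maximum for K clusters.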
Problem

Research questions and friction points this paper is trying to address.

Mismatch between task-agnostic features and clustering demands
Improving image clustering via cross-modal semantic consistency
Enhancing encoder specialization through self-generated pseudo-labels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Modal Semantic Consistency aligns clustering with pre-trained semantics
Self-Enhancement fine-tunes encoder using pseudo-labels for optimization
Dynamic balancing regularizer ensures well-distributed cluster assignments
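The self-enhancement idea above, using the aligned first-stage model as a pseudo-label generator, can be sketched as confidence-filtered self-training. The threshold value and helper names below are hypothetical illustrations, not details from the paper.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep samples whose top assignment probability exceeds a
    confidence cutoff (0.9 here is an assumed value).

    Returns a boolean mask of confident samples and their hard
    pseudo-labels (argmax cluster indices).
    """
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = confidence >= threshold
    return mask, labels

def pseudo_label_loss(probs, labels, mask):
    """Cross-entropy on confident samples only; this self-generated
    signal would supervise joint fine-tuning of encoder and heads."""
    if not mask.any():
        return 0.0
    picked = probs[mask, labels[mask]]
    return float(-np.log(picked + 1e-12).mean())
```

Only confident predictions contribute to the loss, which limits how much early pseudo-label noise is fed back into the encoder.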