🤖 AI Summary
This work addresses the limited zero-shot recognition of large-scale vision-language models such as CLIP on rare categories by proposing LiteEmbed, a lightweight few-shot personalization framework. Without modifying the CLIP backbone, LiteEmbed performs subspace-guided optimization in the text embedding space, using principal component analysis to disentangle coarse-grained semantic directions from fine-grained variations. Two jointly optimized objectives, coarse alignment and fine separation, balance semantic consistency against inter-class separability. The optimized text embeddings are plug-and-play and consistently outperform existing approaches across multiple tasks, including image classification, retrieval, segmentation, and detection. Notably, LiteEmbed markedly improves CLIP's adaptability to emerging or culturally specific rare categories.
📝 Abstract
Large-scale vision-language models such as CLIP achieve strong zero-shot recognition but struggle with classes that are rarely seen during pretraining, including newly emerging entities and culturally specific categories. We introduce LiteEmbed, a lightweight framework for few-shot personalization of CLIP that enables new classes to be added without retraining its encoders. LiteEmbed performs subspace-guided optimization of text embeddings within CLIP's vocabulary, leveraging a PCA-based decomposition that disentangles coarse semantic directions from fine-grained variations. Two complementary objectives, coarse alignment and fine separation, jointly preserve global semantic consistency while enhancing discriminability among visually similar classes. Once optimized, the embeddings are plug-and-play, seamlessly substituting CLIP's original text features across classification, retrieval, segmentation, and detection tasks. Extensive experiments demonstrate substantial gains over prior methods, establishing LiteEmbed as an effective approach for adapting CLIP to underrepresented, rare, or unseen classes.
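The PCA-based decomposition and the two objectives described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the embeddings are random stand-ins for CLIP text features, the subspace size `k` and the function names `coarse_alignment` and `fine_separation` are invented here, and the losses are one plausible reading of "coarse alignment" (keep coarse components close to CLIP's) and "fine separation" (decorrelate fine-grained residuals across classes).

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, dim, k = 8, 32, 4  # k: assumed number of coarse principal directions

# Stand-ins for frozen CLIP text embeddings of the target classes.
T = rng.normal(size=(num_classes, dim))
T /= np.linalg.norm(T, axis=1, keepdims=True)

# PCA via SVD on mean-centered embeddings: the top-k right singular
# vectors span the coarse semantic subspace; the residual is "fine".
mu = T.mean(axis=0)
_, _, Vt = np.linalg.svd(T - mu, full_matrices=False)
P_c = Vt[:k].T @ Vt[:k]  # (dim, dim) projector onto the coarse subspace

def coarse_alignment(E):
    # Keep the coarse components of the optimized embeddings E close
    # to those of the original CLIP embeddings T.
    return np.mean(np.sum(((E - T) @ P_c) ** 2, axis=1))

def fine_separation(E):
    # Mean pairwise cosine similarity of the fine-grained residuals;
    # minimizing it pushes visually similar classes apart.
    F = (E - mu) - (E - mu) @ P_c
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-8)
    sim = F @ F.T
    return sim[~np.eye(num_classes, dtype=bool)].mean()

# Perturbed learnable embeddings; a real run would descend this joint loss.
E = T + 0.05 * rng.normal(size=T.shape)
loss = coarse_alignment(E) + fine_separation(E)
print(float(loss))
```

Once such a loss is minimized, the resulting embeddings would simply replace CLIP's original text features at inference time, which is what makes the approach plug-and-play across downstream tasks.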