🤖 AI Summary
This work addresses the challenge in vision-language models (VLMs) of simultaneously achieving discriminative and generative capabilities for custom concepts, as well as their limited composability with natural language. We propose a composable custom-token learning framework that jointly optimizes a textual inversion loss and a classification loss using only a few images and a textual description of the parent class, while regularizing the learned token to a low-dimensional subspace of attribute embeddings. We introduce Generation Aided Image Retrieval (GAIR), in which text-to-image generation is used to visualize and refine retrieval queries at inference time, so that a single custom token functions effectively and coherently across classification, cross-modal retrieval, and text-to-image generation. On DeepFashion2, our method improves Mean Reciprocal Rank (MRR) for text-to-image retrieval by 7%. It further supports interpretable visualization of composite queries and dynamic inference-time correction, significantly enhancing generalization and controllability for novel concepts.
📝 Abstract
This paper explores the possibility of learning custom tokens for representing new concepts in Vision-Language Models (VLMs). Our aim is to learn tokens that can be effective for both discriminative and generative tasks while composing well with words to form new input queries. The targeted concept is specified in terms of a small set of images and a parent concept described using text. We operate on CLIP text features and propose to use a combination of a textual inversion loss and a classification loss to ensure that text features of the learned token are aligned with image features of the concept in the CLIP embedding space. We restrict the learned token to a low-dimensional subspace spanned by tokens for attributes that are appropriate for the given super-class. These modifications improve the quality of compositions of the learned token with natural language for generating new scenes. Further, we show that learned custom tokens can be used to form queries for the text-to-image retrieval task, and offer the important benefit that composite queries can be visualized to ensure that the desired concept is faithfully encoded. Based on this, we introduce the method of Generation Aided Image Retrieval (GAIR), where the query is modified at inference time to better suit the search intent. On the DeepFashion2 dataset, our method improves Mean Reciprocal Rank (MRR) over relevant baselines by 7%.
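The abstract describes two mechanisms that are easy to make concrete: a joint objective combining a textual inversion loss with a classification loss, and a restriction of the learned token to a subspace spanned by attribute embeddings. The sketch below is one plausible reading of that objective, not the authors' implementation. For brevity it learns the custom concept directly as a vector in CLIP's joint embedding space rather than as an input token of the text encoder, and the attribute list, competing class prompts, and loss weight `lam` are assumed placeholders.

```python
# Minimal sketch of the joint objective (assumptions noted above); uses the
# open-source `clip` package from openai/CLIP.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
for p in model.parameters():
    p.requires_grad_(False)  # CLIP stays frozen; only the custom token is learned

# Hypothetical attributes appropriate for the parent class (e.g. "jacket").
attributes = ["a striped garment", "a floral garment", "a denim garment",
              "a leather garment", "a sleeveless garment"]
# Prompts for competing parent classes, used by the classification loss.
other_classes = ["a photo of a dress", "a photo of a skirt", "a photo of a coat"]

with torch.no_grad():
    attr_feats = F.normalize(
        model.encode_text(clip.tokenize(attributes).to(device)).float(), dim=-1)
    other_feats = F.normalize(
        model.encode_text(clip.tokenize(other_classes).to(device)).float(), dim=-1)

# The custom token is restricted to the span of the attribute features.
coeffs = torch.randn(len(attributes), device=device, requires_grad=True)
optimizer = torch.optim.Adam([coeffs], lr=1e-2)
lam = 0.5  # assumed weight balancing the two losses

def train_step(images):
    """images: a batch of preprocessed photos of the custom concept."""
    with torch.no_grad():
        img_feats = F.normalize(model.encode_image(images.to(device)).float(), dim=-1)
    # Combine attribute embeddings, i.e. stay inside the attribute subspace.
    token_feat = F.normalize(coeffs @ attr_feats, dim=-1)

    # Textual-inversion-style alignment: pull the token toward the concept's image features.
    inv_loss = (1.0 - img_feats @ token_feat).mean()

    # Classification loss: concept images should select the custom token over other classes.
    candidates = torch.cat([token_feat.unsqueeze(0), other_feats], dim=0)
    logits = 100.0 * img_feats @ candidates.t()
    targets = torch.zeros(images.shape[0], dtype=torch.long, device=device)
    cls_loss = F.cross_entropy(logits, targets)

    loss = inv_loss + lam * cls_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this simplified sketch, composing the token with other words would amount to combining `token_feat` with encoded prompt features; the paper instead learns an actual input token, which composes inside the text encoder and can therefore also drive text-to-image generation for query visualization.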