🤖 AI Summary
Personalized vision-language retrieval requires recognizing user-specific concepts (e.g., "my dog Fido") from extremely few examples, which demands jointly modeling personal and general knowledge so the concept can be retrieved across different contexts. To address this, the paper proposes: (1) regularized low-rank adaptation of a small set of parameters in the final layer of the language encoder, which serves as an efficient alternative to textual inversion for learning personal concepts; (2) a parameter-addition strategy for combining multiple learned personal concepts in one model; and (3) a metric that evaluates how well general knowledge is preserved, based on image retrieval accuracy with VLM-generated captions. Evaluated on the DeepFashion2 and ConCon-Chi benchmarks, the method outperforms prior art on personalized retrieval by 4%-22%, establishing new state-of-the-art performance.
📝 Abstract
Personalized vision-language retrieval seeks to recognize new concepts (e.g., "my dog Fido") from only a few examples. This task is challenging because it requires not only learning a new concept from a few images, but also integrating personal and general knowledge to recognize the concept in different contexts. In this paper, we show how to effectively adapt the internal representation of a vision-language dual-encoder model for personalized vision-language retrieval. We find that regularized low-rank adaptation of a small set of parameters in the language encoder's final layer serves as a highly effective alternative to textual inversion for recognizing the personal concept while preserving general knowledge. Additionally, we explore strategies for combining the parameters of multiple learned personal concepts, finding that parameter addition is effective. To evaluate how well general knowledge is preserved in a finetuned representation, we introduce a metric that measures image retrieval accuracy based on captions generated by a vision-language model (VLM). Our approach achieves state-of-the-art accuracy on two benchmarks for personalized image retrieval with natural language queries, DeepFashion2 and ConCon-Chi, outperforming the prior art by 4%-22% on personal retrievals.
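The two core parameter operations described above, a low-rank update to a single weight matrix and the addition of several such updates to combine concepts, can be sketched numerically. This is a minimal illustration with toy dimensions and random stand-ins for learned factors, not the paper's implementation; the names `lora_delta`, `W_fido`, and `W_multi` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions standing in for a language encoder's final layer.
d_out, d_in, rank = 8, 8, 2

# Frozen pretrained weight of that final layer.
W = rng.standard_normal((d_out, d_in))

def lora_delta(rank, d_out, d_in, rng):
    """One personal concept as a low-rank update B @ A.

    In practice B and A would be trained (with regularization) on the
    few example images; here they are random placeholders.
    """
    B = rng.standard_normal((d_out, rank)) * 0.01
    A = rng.standard_normal((rank, d_in)) * 0.01
    return B @ A

# One learned delta per personal concept.
delta_fido = lora_delta(rank, d_out, d_in, rng)
delta_mug = lora_delta(rank, d_out, d_in, rng)

# Single-concept adaptation: W' = W + B @ A.
W_fido = W + delta_fido

# Combining multiple concepts by parameter addition.
W_multi = W + delta_fido + delta_mug

# Each update touches only rank * (d_out + d_in) parameters,
# so the adaptation stays small relative to the full matrix.
assert np.linalg.matrix_rank(delta_fido) <= rank
```

Because each delta is small in norm and low in rank, the adapted matrix stays close to the pretrained `W`, which is the intuition behind preserving general knowledge while adding personal concepts.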