🤖 AI Summary
To address catastrophic forgetting during fine-grained visual retrieval fine-tuning, where adapting large-scale contrastive vision-language models degrades their general cross-modal capabilities, we propose a text-free, efficient, regularization-based fine-tuning framework. Our method combines continual-learning principles with careful validation set design: (i) knowledge-preservation regularization, (ii) selective fine-tuning of the visual encoder only, (iii) fine-grained hyperparameter optimization, and (iv) construction of cross-domain, reproducible validation sets, relying solely on image-side signals to maintain image-text alignment. Unlike prior approaches, our framework requires no text-encoder updates or auxiliary textual annotations. Evaluated on both fine-grained and coarse-grained image-text retrieval benchmarks, it achieves state-of-the-art performance while preserving model generality and enabling domain adaptation. This work shows that high-fidelity image-text alignment can be retained through vision-only supervision, advancing efficient and scalable multimodal adaptation.
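One way to read component (i), combined with (ii), is as a feature-drift penalty that keeps the fine-tuned visual encoder's embeddings close to those of a frozen copy of the pretrained encoder, so that alignment with the untouched text encoder is preserved. Below is a minimal NumPy sketch of such a regularized objective; the function and parameter names (`knowledge_preservation_loss`, `lam`) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Normalize embeddings to unit length, as is standard for contrastive VLMs."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def knowledge_preservation_loss(emb_finetuned, emb_pretrained):
    """Mean squared drift between fine-tuned and frozen pretrained image embeddings.

    Both inputs: (batch, dim) arrays from the same images, one from the encoder
    being fine-tuned, one from a frozen copy of the pretrained encoder.
    """
    a = l2_normalize(np.asarray(emb_finetuned, dtype=np.float64))
    b = l2_normalize(np.asarray(emb_pretrained, dtype=np.float64))
    return float(np.mean(np.sum((a - b) ** 2, axis=-1)))

def total_loss(task_loss, emb_finetuned, emb_pretrained, lam=0.5):
    """Fine-grained retrieval loss plus a weighted knowledge-preservation term.

    `lam` trades off domain adaptation against retention of pretrained knowledge;
    its value is a hyperparameter (component (iii) concerns tuning such choices).
    """
    return task_loss + lam * knowledge_preservation_loss(emb_finetuned, emb_pretrained)
```

Note that only image embeddings appear in the regularizer, matching the text-free constraint: no text encoder or captions are needed at fine-tuning time.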
📝 Abstract
Large-scale contrastive pre-training produces powerful Vision-and-Language Models (VLMs) capable of generating representations (embeddings) effective for a wide variety of visual and multimodal tasks. However, these pretrained embeddings remain suboptimal for fine-grained open-set visual retrieval, where state-of-the-art results require fine-tuning the vision encoder using annotated domain-specific samples. Naively performing such fine-tuning typically leads to catastrophic forgetting, severely diminishing the model's general-purpose visual and cross-modal capabilities.
In this work, we propose a fine-tuning method explicitly designed to achieve an optimal balance between fine-grained domain adaptation and retention of the pretrained VLM's broad multimodal knowledge. Drawing inspiration from the continual learning literature, we systematically analyze standard regularization techniques aimed at knowledge retention and propose an efficient and effective combination strategy. Additionally, we address the commonly overlooked yet critical aspects of validation set design and hyperparameter tuning to ensure reproducibility and robust generalization across datasets and pretrained models. We extensively evaluate our method on both fine-grained and coarse-grained image-image and image-text retrieval benchmarks. Our approach consistently achieves strong results, notably retaining the vision-text alignment without utilizing any text data or the original text encoder during fine-tuning. Code and model checkpoints: https://github.com/nikosips/infusing.