Infusing fine-grained visual knowledge to Vision-Language Models

📅 2025-08-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address catastrophic forgetting in fine-grained visual retrieval fine-tuning—where adapting a large-scale contrastive vision-language model to a narrow domain degrades its general cross-modal capabilities—we propose a text-free, efficient regularization-based fine-tuning framework. Our method integrates continual learning principles with robust validation set design: (i) knowledge preservation regularization, (ii) selective fine-tuning of the visual encoder only, (iii) fine-grained hyperparameter optimization, and (iv) construction of cross-domain, reproducible validation sets—relying solely on image-side signals to maintain image–text alignment. Unlike prior approaches, our framework requires no text encoder updates or auxiliary textual annotations. Evaluated on both fine-grained and coarse-grained image–text retrieval benchmarks, it achieves state-of-the-art performance while preserving model generality and enabling domain adaptation. This work demonstrates that high-fidelity alignment can be retained through vision-only supervision, advancing efficient and scalable multimodal adaptation.

📝 Abstract
Large-scale contrastive pre-training produces powerful Vision-and-Language Models (VLMs) capable of generating representations (embeddings) effective for a wide variety of visual and multimodal tasks. However, these pretrained embeddings remain suboptimal for fine-grained open-set visual retrieval, where state-of-the-art results require fine-tuning the vision encoder using annotated domain-specific samples. Naively performing such fine-tuning typically leads to catastrophic forgetting, severely diminishing the model's general-purpose visual and cross-modal capabilities. In this work, we propose a fine-tuning method explicitly designed to achieve optimal balance between fine-grained domain adaptation and retention of the pretrained VLM's broad multimodal knowledge. Drawing inspiration from continual learning literature, we systematically analyze standard regularization techniques aimed at knowledge retention and propose an efficient and effective combination strategy. Additionally, we address the commonly overlooked yet critical aspects of validation set design and hyperparameter tuning to ensure reproducibility and robust generalization across datasets and pretrained models. We extensively evaluate our method on both fine-grained and coarse-grained image-image and image-text retrieval benchmarks. Our approach consistently achieves strong results, notably retaining the visual-text alignment without utilizing any text data or the original text encoder during fine-tuning. Code and model checkpoints: https://github.com/nikosips/infusing .
Problem

Research questions and friction points this paper is trying to address.

Addressing catastrophic forgetting in fine-grained visual retrieval fine-tuning
Balancing domain adaptation with retention of multimodal knowledge
Improving generalization across datasets without text data usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning method balances domain adaptation and knowledge retention
Combines regularization techniques inspired by continual learning
Achieves strong retrieval without text data during fine-tuning
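The core idea behind the bullets above—fine-tune only the vision encoder on labeled domain images while a regularizer pulls its embeddings back toward the frozen pretrained encoder—can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `VisionEncoder` class, the pairwise similarity loss, and the cosine-distance regularizer are all simplified stand-ins chosen for brevity.

```python
import copy
import torch
import torch.nn.functional as F

# Hypothetical stand-in for a pretrained CLIP-style vision encoder.
class VisionEncoder(torch.nn.Module):
    def __init__(self, dim_in=64, dim_out=32):
        super().__init__()
        self.proj = torch.nn.Linear(dim_in, dim_out)

    def forward(self, x):
        # L2-normalized embeddings, as in contrastive VLMs.
        return F.normalize(self.proj(x), dim=-1)

def finetune_step(encoder, frozen_encoder, images, labels, opt, reg_weight=1.0):
    """One fine-tuning step: a supervised similarity loss on domain data plus
    an image-side regularizer keeping embeddings close to the frozen
    pretrained encoder (knowledge preservation, no text data involved)."""
    emb = encoder(images)
    with torch.no_grad():
        ref = frozen_encoder(images)  # pretrained targets, never updated
    # Illustrative metric-learning loss over same-class pairs.
    sim = emb @ emb.t()
    same = (labels[:, None] == labels[None, :]).float()
    task_loss = F.binary_cross_entropy_with_logits(sim, same)
    # Preservation regularizer: cosine distance to pretrained embeddings.
    reg_loss = (1.0 - (emb * ref).sum(-1)).mean()
    loss = task_loss + reg_weight * reg_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage: snapshot the pretrained weights, then fine-tune against them.
encoder = VisionEncoder()
frozen = copy.deepcopy(encoder)
for p in frozen.parameters():
    p.requires_grad_(False)
opt = torch.optim.SGD(encoder.parameters(), lr=0.01)
images, labels = torch.randn(8, 64), torch.randint(0, 3, (8,))
loss = finetune_step(encoder, frozen, images, labels, opt)
```

Because the regularizer compares image embeddings only against the frozen vision encoder, the text encoder and all text data stay untouched—yet the fine-tuned embeddings remain in the pretrained space, which is what preserves image–text alignment.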
Nikolaos-Antonios Ypsilantis
Czech Technical University in Prague
Computer Vision · Metric Learning · Image Retrieval
Kaifeng Chen
Google DeepMind
André Araujo
Google DeepMind
Ondřej Chum
VRG, FEE, Czech Technical University in Prague