Textual Inversion for Efficient Adaptation of Open-Vocabulary Object Detectors Without Forgetting

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address catastrophic forgetting of natural-language querying and zero-shot capabilities in vision-language models (VLMs) during fine-tuning for open-vocabulary object detection, this paper proposes an efficient adaptation method based on textual inversion. The approach optimizes only the token embeddings corresponding to newly introduced categories while freezing all other model parameters; gradient computation and storage are confined to the embedding layer, drastically reducing computational overhead. The learned embeddings remain fully compatible with the original model weights, preserving zero-shot transferability. Experiments demonstrate that the method matches or surpasses full-parameter fine-tuning baselines across multiple benchmarks, effectively mitigates capability degradation, and supports cross-domain generalization (e.g., from real images to sketches). The core contribution is the first successful adaptation of the textual inversion paradigm to object detection, enabling open-vocabulary detection with minimal parameter updates while maintaining robust zero-shot performance.
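As a rough illustration (not the paper's code), the core idea of confining all learning to a new token's embedding can be sketched with a toy frozen "encoder" in NumPy: the fixed matrix stands in for the frozen VLM text encoder, the target vector for a visual feature of the new category, and gradient descent touches only the single embedding vector.

```python
import numpy as np

# Toy sketch of textual inversion: every "model" weight stays frozen;
# only the embedding of the newly introduced token is updated.
# All names and sizes here are illustrative, not from the paper.

rng = np.random.default_rng(0)
D = 16                                  # token-embedding dimension

W_frozen = 0.1 * rng.normal(size=(D, D))  # stands in for the frozen text encoder
target = rng.normal(size=D)               # stands in for a feature of the new class
target /= np.linalg.norm(target)

emb = 0.01 * rng.normal(size=D)           # the ONLY trainable parameter

def loss_and_grad(e):
    f = W_frozen @ e                      # frozen forward pass
    diff = f - target
    loss = 0.5 * float(diff @ diff)       # squared error to the target feature
    grad = W_frozen.T @ diff              # gradient w.r.t. the embedding alone
    return loss, grad

init_loss, _ = loss_and_grad(emb)
lr = 0.5
for _ in range(200):
    _, g = loss_and_grad(emb)
    emb -= lr * g                         # update confined to one D-dim vector

final_loss, _ = loss_and_grad(emb)
print(final_loss < init_loss)             # loss on the new concept has decreased
```

In the real method the frozen part is the full detector/VLM and the loss is the detection objective, but the optimizer state is similarly restricted to the new token embeddings, which is what keeps the compute and storage cost small.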

📝 Abstract
Recent progress in large pre-trained vision-language models (VLMs) has reached state-of-the-art performance on several object detection benchmarks and boasts strong zero-shot capabilities, but for optimal performance on specific targets some form of fine-tuning is still necessary. While the initial VLM weights allow for great few-shot transfer learning, this usually involves the loss of the original natural language querying and zero-shot capabilities. Inspired by the success of Textual Inversion (TI) in personalizing text-to-image diffusion models, we propose a similar formulation for open-vocabulary object detection. TI allows extending the VLM vocabulary by learning new tokens or improving existing ones to accurately detect novel or fine-grained objects from as few as three examples. The learned tokens are completely compatible with the original VLM weights while keeping them frozen, retaining the original model's benchmark performance, and leveraging its existing capabilities such as zero-shot domain transfer (e.g., detecting a sketch of an object after training only on real photos). Storage and gradient calculations are limited to the token embedding dimension, requiring significantly less compute than full-model fine-tuning. We evaluate whether the method matches or outperforms baseline methods that suffer from forgetting in a wide variety of quantitative and qualitative experiments.
Problem

Research questions and friction points this paper is trying to address.

How to adapt open-vocabulary object detectors to new targets without catastrophic forgetting
How to learn to detect novel objects from only a few examples
How to retain the original model's performance and zero-shot capabilities with minimal compute
Innovation

Methods, ideas, or system contributions that make the work stand out.

Textual Inversion for extending the detector's vocabulary
New-token learning with all VLM weights kept frozen
Gradients and optimizer state confined to the token-embedding dimension
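The compute claim in the last point is easy to make concrete with back-of-the-envelope numbers (the figures below are hypothetical, not taken from the paper): the trainable parameter count scales with the number of new tokens times the embedding width, not with the model size.

```python
# Illustrative comparison of trainable parameters: only the new token
# embeddings are optimized, so gradient storage and optimizer state
# scale with num_new_tokens * embed_dim rather than the full VLM.
embed_dim = 512                    # assumed token-embedding width
num_new_tokens = 5                 # new categories being added
full_model_params = 400_000_000    # hypothetical VLM size

trainable = num_new_tokens * embed_dim
print(trainable)                           # 2560 trainable parameters
print(trainable / full_model_params)       # tiny fraction of full fine-tuning
```

Under these assumed numbers, fewer than three thousand parameters receive gradients, which is why the method's storage and compute overhead is negligible next to full-model fine-tuning.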