ABC: Achieving Better Control of Multimodal Embeddings using VLMs

📅 2025-03-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current CLIP-style models employ independent dual-tower architectures for image and text encoding, resulting in weak cross-modal interaction and limited fine-grained control over embeddings via user instructions. To address this, we propose the first instruction-driven deep vision-language joint encoder: built upon an end-to-end trained vision-language model (VLM) backbone, it introduces an instruction-aware feature fusion mechanism that enables precise, natural-language-guided modulation of visual representations. To support systematic evaluation, we construct CtrlBench—the first benchmark tailored for instruction-driven fine-grained retrieval—and publicly release both the model and dataset. Experiments demonstrate state-of-the-art performance on MSCOCO image-text retrieval among models of comparable scale; top-ranked results on classification and VQA tasks in the Massive Multimodal Embedding Benchmark; and substantial improvements in retrieval accuracy for ambiguous scenes and instruction-dependent queries.

Technology Category

Application Category

📝 Abstract
Visual embedding models excel at zero-shot tasks like visual retrieval and classification. However, these models cannot be used for tasks that contain ambiguity or require user instruction. These tasks necessitate a multimodal embedding model, which outputs embeddings that combine visual and natural language input. Existing CLIP-based approaches embed images and text independently, and fuse the result. We find that this results in weak interactions between modalities, and poor user control over the representation. We introduce ABC, an open-source multimodal embedding model that uses a vision-language model backbone to deeply integrate image features with natural language instructions. ABC achieves bestfor-size performance on MSCOCO image-to-text retrieval and is the top performing model on classification and VQA tasks in the Massive Multimodal Embedding Benchmark. With a strongly unified vision-language representation, ABC can use natural language to solve subtle and potentially ambiguous visual retrieval problems. To evaluate this capability, we design CtrlBench, a benchmark that requires interleaving textual instructions with image content for correct retrieval. ABC advances the state of multimodal embeddings by offering high-quality representations and flexible natural language control. Our model and datasets are available at our project page.
Problem

Research questions and friction points this paper is trying to address.

Weak interaction between visual and text modalities in existing models.
Lack of user control over multimodal embedding representations.
Difficulty in handling ambiguous or instruction-based visual retrieval tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep integration of image and text features
Open-source multimodal embedding model ABC
Natural language control for visual retrieval
🔎 Similar Papers
No similar papers found.