🤖 AI Summary
Multimodal large language models (MLLMs) exhibit significant deficiencies in visual object matching tasks. Method: This work systematically identifies eight categories of matching cues on which MLLMs consistently underperform and introduces the first fine-grained, multi-source, human-annotated Multimodal Visual Matching (MMVM) benchmark, built from 15 open-source datasets and Internet videos with rationale-based annotations. It also proposes CoLVA, a novel contrastive MLLM featuring a fine-grained vision expert with object-level contrastive learning and an instruction augmentation strategy, supported by an automatic annotation pipeline that constructs a high-quality 220K-sample supervised fine-tuning (SFT) dataset with reasoning annotations. Contribution/Results: On MMVM, CoLVA achieves 51.06% overall accuracy, outperforming GPT-4o by 8.41 percentage points and the baseline by 23.58 points. The code, MMVM benchmark, SFT dataset, and CoLVA model are fully open-sourced.
📝 Abstract
Recent advancements in multimodal models have demonstrated strong visual perception, reasoning, and vision-language understanding. However, studies on visual matching, the task of finding visual correspondences between objects that is essential in vision research, are still missing. Our research reveals that the matching capabilities of recent multimodal LLMs (MLLMs) still exhibit systematic shortcomings, even in strong current MLLMs such as GPT-4o. To investigate this, we construct a Multimodal Visual Matching (MMVM) benchmark to fairly evaluate over 30 different MLLMs. The MMVM benchmark is built from 15 open-source datasets and Internet videos with manual annotation. We categorize the samples of the MMVM benchmark into eight aspects based on the required cues and capabilities, enabling a more comprehensive evaluation and analysis of current MLLMs. In addition, we design an automatic annotation pipeline to generate the MMVM SFT dataset, comprising 220K visual matching samples with reasoning annotations. Finally, we present CoLVA, a novel contrastive MLLM with two technical designs: a fine-grained vision expert with object-level contrastive learning, and an instruction augmentation strategy. CoLVA achieves 51.06% overall accuracy (OA) on the MMVM benchmark, surpassing GPT-4o and the baseline by 8.41% and 23.58% OA, respectively. These results demonstrate the effectiveness of our MMVM SFT dataset and our technical designs. Code, benchmark, dataset, and models are available at https://github.com/zhouyiks/CoLVA.
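The abstract mentions object-level contrastive learning but does not spell out the loss. As an illustrative sketch only (not CoLVA's actual implementation), the idea can be captured with a standard InfoNCE objective over per-object embeddings: embeddings of the same object in two views are pulled together while all other objects in the batch act as negatives. The function name `info_nce_loss` and the temperature value are hypothetical choices for this example.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.07):
    """InfoNCE over object embeddings (illustrative sketch).

    anchors[i] and positives[i] are embeddings of the same object seen
    in two different images; every mismatched pair (i, j != i) serves
    as a negative. Rows are L2-normalized so the dot product is a
    cosine similarity.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct match for anchor i is positive i, i.e. the diagonal.
    return -np.mean(np.diag(log_probs))

# Toy usage: orthogonal object embeddings matched correctly vs. shuffled.
objs = np.eye(4)
print(info_nce_loss(objs, objs.copy()))            # near zero: perfect matches
print(info_nce_loss(objs, np.roll(objs, 1, axis=0)))  # large: wrong matches
```

Minimizing this loss sharpens object-level discrimination, which is the property visual matching requires; the real model would compute such embeddings with its fine-grained vision expert rather than from raw one-hot vectors.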