Contra4: Evaluating Contrastive Cross-Modal Reasoning in Audio, Video, Image, and 3D

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses contrastive cross-modal reasoning: given a natural language query, can a model identify the semantically best-aligned candidate among image, audio, video, and 3D instances? To this end, we introduce Contra4, the first four-modality benchmark for this task, comprising 174k training examples and a human-verified test set of 2.3k samples. Annotation quality is enforced with a mixture-of-models round-trip-consistency filter, and task-specific fine-tuning strategies are designed to strengthen cross-modal alignment and discriminative capability. Fine-tuning improves performance by 56% relative to baseline, yet state-of-the-art models still reach only 56% accuracy overall and 42% in the four-modality setting, exposing fundamental limitations in their contrastive reasoning capacity and demonstrating the value of both the modeling framework and the evaluation paradigm.

📝 Abstract
Real-world decision-making often begins with identifying which modality contains the most relevant information for a given query. While recent multimodal models have made impressive progress in processing diverse inputs, it remains unclear whether they can reason contrastively across multiple modalities to select the one that best satisfies a natural language prompt. We argue this capability is foundational, especially in retrieval-augmented and decision-time contexts, where systems must evaluate multiple signals and identify which one conveys the relevant information. To evaluate this skill, we introduce Contra4, a dataset for contrastive cross-modal reasoning across four modalities: image, audio, video, and 3D. Each example presents a natural language question alongside multiple candidate modality instances, and the model must select the one that semantically aligns with the prompt. Contra4 combines human-annotated captions with a mixture-of-models round-trip-consistency filter to ensure high-quality supervision, resulting in 174k training examples and a manually verified test set of 2.3k samples. While task-specific fine-tuning improves performance by 56% relative to baseline, state-of-the-art models still achieve only 56% accuracy overall and 42% in four-modality settings, underscoring a significant limitation in current multimodal models.
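The mixture-of-models round-trip-consistency filter described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a generated sample survives only if a majority of independent answer models, shown the question and all candidate captions, pick the intended target modality. The callables `round_trip_consistent` and `keyword_model` are hypothetical stand-ins.

```python
# Illustrative sketch (not the paper's pipeline) of round-trip-consistency
# filtering over candidate modality captions.

def round_trip_consistent(question, candidates, target, answer_models):
    """Keep a sample only if a strict majority of answer models agree.

    question:      natural language question generated from the target caption
    candidates:    dict mapping modality name -> caption
    target:        the modality the question was generated from
    answer_models: callables (question, candidates) -> chosen modality
    """
    votes = [model(question, candidates) for model in answer_models]
    # A strict majority of the model mixture must recover the target.
    return votes.count(target) > len(answer_models) // 2


def keyword_model(question, candidates):
    # Hypothetical stand-in "answer model": picks the candidate whose
    # caption shares the most words with the question.
    q_words = set(question.lower().split())
    return max(
        candidates,
        key=lambda m: len(q_words & set(candidates[m].lower().split())),
    )
```

In the actual pipeline each answer model would be a multimodal LLM; the majority-vote criterion is one plausible way to combine a model mixture into a keep/discard decision.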
Problem

Research questions and friction points this paper is trying to address.

Evaluating contrastive reasoning across audio, video, image, and 3D modalities
Assessing model ability to select relevant modality for natural language queries
Addressing limitations in multimodal models' cross-modal reasoning accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contra4, a four-modality dataset for contrastive cross-modal reasoning
Human-annotated captions filtered by a mixture-of-models round-trip-consistency check
Evaluation of contrastive selection across image, audio, video, and 3D