Visual Contexts Clarify Ambiguous Expressions: A Benchmark Dataset

📅 2024-11-21
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses intent disambiguation for multimodal AI under visual contextual guidance, particularly for ambiguous or indirect linguistic expressions. To bridge the gap in modeling context-dependent implicit intent, the authors introduce VAGUE, the first vision-language benchmark designed specifically to evaluate intent interpretation from vague or indirect utterances, comprising 3.9K utterance–scene pairs. Methodologically, they contribute an automated, image-driven pipeline for generating prompt-solution pairs, together with a systematic evaluation framework for Vision-Language Models (VLMs). Experiments reveal that state-of-the-art VLMs exhibit significant limitations on the complex contextual reasoning the benchmark requires. All code, data, and evaluation tools are publicly released, establishing a reproducible benchmark for intent-understanding research in natural human–AI interaction.

📝 Abstract
The ability to perform complex reasoning across multimodal inputs is essential for models to effectively interact with humans in real-world scenarios. Advancements in vision-language models have significantly improved performance on tasks that require processing explicit and direct textual inputs, such as Visual Question Answering (VQA) and Visual Grounding (VG). However, less attention has been given to improving models' capabilities to comprehend nuanced and ambiguous forms of communication. This presents a critical challenge, as human language in real-world interactions often conveys hidden intentions that rely on context for accurate interpretation. To address this gap, we propose VAGUE, a multimodal benchmark comprising 3.9K indirect human utterances paired with corresponding scenes. Additionally, we contribute a model-based pipeline for generating prompt-solution pairs from input images. Our work delves deeper into the ability of models to understand indirect communication and seeks to contribute to the development of models capable of more refined and human-like interactions. Extensive evaluation of multiple VLMs reveals that mainstream models still struggle with indirect communication when required to perform complex linguistic and visual reasoning. We release our code and data at https://github.com/Hazel-Heejeong-Nam/VAGUE.git.
Problem

Research questions and friction points this paper is trying to address.

AI systems struggle with multimodal reasoning for intent disambiguation.
VAGUE benchmark evaluates AI's ability to integrate visual context.
Current models fail to distinguish true intent from visual correlations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

The VAGUE benchmark evaluates multimodal AI systems.
Integrates visual context for intent disambiguation.
Includes 3.9K indirect expressions paired with scenes.
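The evaluation setup described above can be sketched as a multiple-choice loop over utterance–scene pairs. Everything below is illustrative: the item fields, the placeholder scene paths, and the word-overlap baseline are assumptions for the sketch, not the paper's actual data format, models, or metrics.

```python
# Hypothetical sketch of a multiple-choice intent-evaluation loop in the
# spirit of VAGUE: each item pairs an indirect utterance and a scene with
# candidate intent interpretations, and a model picks one.

def evaluate_intent_mcq(items, predict):
    """Return accuracy of predict(utterance, scene, choices) over items."""
    correct = 0
    for item in items:
        pred = predict(item["utterance"], item["scene"], item["choices"])
        if pred == item["answer"]:
            correct += 1
    return correct / len(items)

def lexical_baseline(utterance, scene, choices):
    """Toy stand-in for a VLM: pick the choice sharing the most words
    with the utterance. It ignores the scene entirely, so it cannot use
    visual context to disambiguate intent."""
    words = set(utterance.lower().split())
    return max(range(len(choices)),
               key=lambda i: len(words & set(choices[i].lower().split())))

# Two toy items (invented for illustration; scene paths are placeholders).
items = [
    {"utterance": "can you grab that for me",
     "scene": "kitchen.jpg",
     "choices": ["grab the cup from the table", "open the window"],
     "answer": 0},
    {"utterance": "it is freezing in here",
     "scene": "living_room.jpg",
     "choices": ["turn up the heat", "it needs cleaning"],
     "answer": 0},
]

print(evaluate_intent_mcq(items, lexical_baseline))  # → 0.5
```

The baseline gets the second item wrong because surface word overlap ("it") beats the intended reading ("turn up the heat"), which is exactly the kind of indirect, context-dependent intent the benchmark is designed to probe.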