Accounting for Focus Ambiguity in Visual Questions

📅 2025-01-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
In visual question answering (VQA), focus ambiguity, where the content a question describes could plausibly refer to multiple regions of an image, frequently leads models to misinterpret the question. To address this, we introduce VQ-FocusAmbiguity, the first VQA benchmark explicitly designed to evaluate this form of ambiguity. We formally characterize question-level focus ambiguity and define two novel tasks: ambiguity detection and multi-region visual grounding of questions, which is distinct from conventional answer grounding. The dataset is built from expert-annotated fine-grained object segmentations with precise question-to-region alignment, establishing a rigorous evaluation standard. Extensive experiments on state-of-the-art models, including BLIP-2 and LLaVA, reveal substantial gaps between model and human performance on both ambiguity recognition and localization, confirming the dataset's difficulty. All data, annotation guidelines, and an online evaluation server are publicly released to foster reproducible research.
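
To make the two tasks concrete, here is a minimal Python sketch of what a benchmark record and the Task 1 (ambiguity detection) metric could look like. The record schema, all field names, and the plain-accuracy metric are assumptions for illustration; the paper does not publish this interface.

```python
# Hypothetical sketch: the schema and field names below are NOT taken from
# the paper; they only illustrate the two benchmark tasks.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class FocusAmbiguityExample:
    image_path: str
    question: str
    # Task 1 label: does the question have focus ambiguity?
    has_focus_ambiguity: bool
    # Task 2 labels: one binary segmentation mask (H x W) per plausible focus region.
    focus_masks: List[np.ndarray]


def detection_accuracy(preds: List[bool],
                       examples: List[FocusAmbiguityExample]) -> float:
    """Task 1: fraction of questions whose ambiguity flag is predicted correctly."""
    correct = sum(p == ex.has_focus_ambiguity for p, ex in zip(preds, examples))
    return correct / max(len(examples), 1)
```

Keeping one binary mask per plausible focus region makes Task 2's "localize all plausible regions" label set explicit, rather than collapsing the regions into a single merged mask.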

📝 Abstract
No existing work on visual question answering explicitly accounts for ambiguity regarding where the content described in the question is located in the image. To fill this gap, we introduce VQ-FocusAmbiguity, the first VQA dataset that visually grounds each region described in the question that is necessary to arrive at the answer. We then provide an analysis showing how our dataset for visually grounding 'questions' is distinct from visually grounding 'answers', and characterize the properties of the questions and segmentations provided in our dataset. Finally, we benchmark modern models for two novel tasks: recognizing whether a visual question has focus ambiguity and localizing all plausible focus regions within the image. Results show that the dataset is challenging for modern models. To facilitate future progress on these tasks, we publicly share the dataset with an evaluation server at https://focusambiguity.github.io/.
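
For the second task (localizing all plausible focus regions), one plausible way to score predictions against multi-region segmentations is greedy IoU matching, sketched below. The 0.5 IoU threshold, the greedy matching scheme, and the F1 aggregation are assumptions, not the official metric of the paper's evaluation server.

```python
# A hedged sketch of one way Task 2 could be scored; the paper's actual
# evaluation-server metric may differ.
from typing import List

import numpy as np


def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two binary masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union > 0 else 0.0


def grounding_f1(pred_masks: List[np.ndarray],
                 gt_masks: List[np.ndarray],
                 iou_thresh: float = 0.5) -> float:
    """Greedily match each ground-truth mask to its best unused prediction;
    a match counts as a true positive when IoU >= iou_thresh."""
    unused = list(range(len(pred_masks)))
    tp = 0
    for gt in gt_masks:
        if not unused:
            break
        best = max(unused, key=lambda i: mask_iou(pred_masks[i], gt))
        if mask_iou(pred_masks[best], gt) >= iou_thresh:
            tp += 1
            unused.remove(best)
    precision = tp / len(pred_masks) if pred_masks else 0.0
    recall = tp / len(gt_masks) if gt_masks else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

An F1-style score penalizes both missed plausible regions (low recall) and spurious extra regions (low precision), which matches the task's requirement to localize all, and only, the plausible focus regions.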
Problem

Research questions and friction points this paper is trying to address.

Visual Question Answering
Ambiguity Resolution
Element Localization
Innovation

Methods, ideas, or system contributions that make the work stand out.

VQ-FocusAmbiguity dataset
focus ambiguity annotation
multi-region focus localization