Less Detail, Better Answers: Degradation-Driven Prompting for VQA

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the susceptibility of vision-language models to hallucinations and reasoning errors caused by redundant details in high-resolution images. To mitigate this issue, the authors propose a degradation-driven prompting framework that strategically reduces image fidelity—through techniques such as downsampling, blur masking, and contrast enhancement—and integrates structured visual prompts, including white-background masks and orthogonal lines, to guide the model toward salient structural information. Tailored degradation and prompting strategies are designed for two distinct task categories: physical properties and perceptual phenomena, thereby realizing a “less-is-more” reasoning paradigm. Experimental results demonstrate that the proposed framework significantly improves accuracy across multiple challenging visual question answering benchmarks, effectively suppresses texture-induced distractions, and enhances model robustness against visual illusions and anomalies.
📝 Abstract
Recent advancements in Vision-Language Models (VLMs) have significantly pushed the boundaries of Visual Question Answering (VQA). However, high-resolution details can sometimes become noise that leads to hallucinations or reasoning errors. In this paper, we propose Degradation-Driven Prompting (DDP), a novel framework that improves VQA performance by strategically reducing image fidelity to force models to focus on essential structural information. We evaluate DDP across two distinct tasks. Physical attributes targets images prone to human misjudgment, where DDP employs a combination of 80p downsampling, structural visual aids (white background masks and orthometric lines), and In-Context Learning (ICL) to calibrate the model's focus. Perceptual phenomena addresses various machine-susceptible visual anomalies and illusions, including Visual Anomaly (VA), Color (CI), Motion (MI), Gestalt (GI), Geometric (GSI), and Visual Illusions (VI). For this task, DDP integrates a task-classification stage with specialized tools such as blur masks and contrast enhancement alongside downsampling. Our experimental results demonstrate that less is more: by intentionally degrading visual inputs and providing targeted structural prompts, DDP enables VLMs to bypass distracting textures and achieve superior reasoning accuracy on challenging visual benchmarks.
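The degradation operations named in the abstract (downsampling, blur masking, contrast enhancement) can be sketched on a plain grayscale image represented as a list of rows of 0–255 integers. This is a minimal illustrative sketch, not the paper's actual pipeline; the function names and the exact parameters (e.g. the "80p" target resolution, blur kernel size) are assumptions.

```python
def downsample(img, factor):
    """Nearest-neighbour downsampling: keep every `factor`-th pixel in each axis."""
    return [row[::factor] for row in img[::factor]]

def box_blur(img):
    """3x3 box blur; border pixels are handled by clamping to the image edge."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    total += img[ny][nx]
            out[y][x] = total // 9  # average of the 3x3 neighbourhood
    return out

def stretch_contrast(img):
    """Linear contrast stretch of pixel values to the full 0-255 range."""
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)
    if lo == hi:  # flat image: nothing to stretch
        return [row[:] for row in img]
    return [[(p - lo) * 255 // (hi - lo) for p in row] for row in img]
```

In the paper's framing, such operations are applied before prompting so that fine texture is suppressed and only coarse structure reaches the model.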
Problem

Research questions and friction points this paper is trying to address.

Visual Question Answering
Vision-Language Models
Image Detail Noise
Reasoning Errors
Visual Hallucination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Degradation-Driven Prompting
Vision-Language Models
Visual Question Answering
Structural Prompting
Perceptual Illusions