🤖 AI Summary
Weak spatial reasoning, inaccurate visual localization, and frequent hallucinations plague multimodal large language models (MLLMs). To address these issues, this paper proposes ByDeWay, a parameter-free, plug-and-play enhancement framework that requires no fine-tuning or additional training. Its core innovation is a Layered-Depth-Based Prompting (LDP) strategy: monocular depth estimation partitions each image into closest, mid-range, and farthest regions, and region-specific captions are generated to explicitly encode the scene's spatial hierarchy; a grounded vision-language model and modular prompt engineering are combined to strengthen vision-language alignment. Evaluated on benchmarks including POPE and GQA, ByDeWay significantly reduces hallucination rates (by up to 32.1%) and improves spatial reasoning accuracy, demonstrating strong generalization and robustness across diverse black-box MLLMs such as Qwen-VL, LLaVA, and MiniGPT-4.
📝 Abstract
We introduce ByDeWay, a training-free framework designed to enhance the performance of Multimodal Large Language Models (MLLMs). ByDeWay uses a novel prompting strategy called Layered-Depth-Based Prompting (LDP), which improves spatial reasoning and grounding without modifying any model parameters. It segments the scene into closest, mid-range, and farthest layers using monocular depth estimation, then generates region-specific captions with a grounded vision-language model. These structured, depth-aware captions are appended to the image-question prompt, enriching it with spatial context. This guides MLLMs to produce more grounded and less hallucinated responses. Our method is lightweight, modular, and compatible with black-box MLLMs. Experiments on hallucination-sensitive (POPE) and reasoning-intensive (GQA) benchmarks show consistent improvements across multiple MLLMs, validating the effectiveness of depth-aware prompting in a zero-training setting.
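The core LDP pipeline described above can be sketched in a few lines: partition a monocular depth map into three layers, then fold per-layer captions into the text prompt. This is an illustrative sketch only — the percentile thresholds and prompt wording are assumptions, and in the real system the depth map comes from a monocular depth estimator and the captions from a grounded vision-language model, both of which are stubbed out here.

```python
import numpy as np

def layer_masks(depth: np.ndarray) -> dict:
    """Partition a depth map into closest / mid-range / farthest boolean masks.

    Uses the 33rd and 66th percentiles as layer boundaries — an assumed
    thresholding scheme; the paper's exact partitioning may differ.
    Smaller depth values are treated as nearer to the camera.
    """
    lo, hi = np.percentile(depth, [33, 66])
    return {
        "closest": depth < lo,
        "mid-range": (depth >= lo) & (depth < hi),
        "farthest": depth >= hi,
    }

def build_ldp_prompt(question: str, layer_captions: dict) -> str:
    """Append depth-aware, layer-tagged captions to the question,
    mirroring the structured spatial context LDP adds to the prompt.
    In the full pipeline, each caption would be produced by a grounded
    vision-language model run on the corresponding masked region."""
    lines = [f"[{name} layer] {caption}" for name, caption in layer_captions.items()]
    lines.append(f"Question: {question}")
    return "\n".join(lines)

# Toy usage: a synthetic 10x10 depth map and hand-written captions.
depth = np.linspace(0.0, 1.0, 100).reshape(10, 10)
masks = layer_masks(depth)
prompt = build_ldp_prompt(
    "Which object is closest to the camera?",
    {"closest": "a cat on a rug", "mid-range": "a wooden table", "farthest": "a window"},
)
print(prompt)
```

Because the three masks are defined by disjoint depth ranges, they partition the image exactly, so every pixel contributes to exactly one layer caption; the enriched prompt can then be sent unchanged to any black-box MLLM.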