🤖 AI Summary
Existing training-free visual prompting methods treat image objects as isolated entities, failing to model their spatial relationships and thereby limiting the zero-shot spatial reasoning capabilities of multimodal language models. To address this limitation, this work proposes Graph-of-Mark (GoM), which, for the first time, overlays scene graphs onto input images as pixel-level visual prompts. By explicitly encoding relative positional and directional relationships between objects, GoM guides the model to better understand spatial layouts. The approach integrates object region annotations with auxiliary graph-descriptive text, enhancing the spatial reasoning abilities of existing multimodal language models without any training. Experiments across three open-source models and four datasets demonstrate that GoM improves zero-shot accuracy by up to 11 percentage points on visual question answering and localization tasks.
📝 Abstract
Recent advances in training-free visual prompting, such as Set-of-Mark, have emerged as a promising direction for enhancing the grounding capabilities of multimodal language models (MLMs). These techniques operate by partitioning the input image into object regions and annotating them with marks, predominantly boxes with numeric identifiers, before feeding the augmented image to the MLM. However, these approaches treat marked objects as isolated entities, failing to capture the relationships between them. On these premises, we propose Graph-of-Mark (GoM), the first pixel-level visual prompting technique that overlays scene graphs onto the input image for spatial reasoning tasks. We evaluate GoM across 3 open-source MLMs and 4 different datasets, conducting extensive ablations on the drawn components and investigating the impact of auxiliary graph descriptions in the text prompt. Our results demonstrate that GoM consistently improves the zero-shot ability of MLMs to interpret object positions and relative directions, raising base accuracy on visual question answering and localization by up to 11 percentage points.
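To make the idea concrete, the sketch below shows one way the auxiliary graph-descriptive text mentioned above could be derived: given numbered object boxes (the marks), compute a coarse pairwise direction from box centers and serialize the resulting edges as text to accompany the prompt. The box coordinates, relation vocabulary, and function names here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical object boxes: mark id -> (x0, y0, x1, y1). Values are
# illustrative only, not taken from the paper.
objects = {1: (30, 40, 120, 160), 2: (200, 60, 300, 180)}

def center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def relation(box_a, box_b):
    """Coarse direction of object A relative to object B, from box centers.

    Picks the dominant axis: horizontal offset wins ties.
    """
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    dx, dy = ax - bx, ay - by
    if abs(dx) >= abs(dy):
        return "left of" if dx < 0 else "right of"
    return "above" if dy < 0 else "below"

def graph_description(objects):
    """Serialize pairwise spatial relations between marks as auxiliary text."""
    ids = sorted(objects)
    edges = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            edges.append(f"object {a} is {relation(objects[a], objects[b])} object {b}")
    return "; ".join(edges)

print(graph_description(objects))  # → "object 1 is left of object 2"
```

In a GoM-style pipeline, each such edge would also be rendered onto the image itself (a line between the two marks, labeled with the relation), so the model receives the spatial structure both as pixels and as text.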