Graph-of-Mark: Promote Spatial Reasoning in Multimodal Language Models with Graph-Based Visual Prompting

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing training-free visual prompting methods treat image objects as isolated entities, failing to model their spatial relationships and thereby limiting the zero-shot spatial reasoning capabilities of multimodal language models. To address this limitation, this work proposes Graph-of-Mark (GoM), which, for the first time, overlays scene graphs onto input images as pixel-level visual prompts. By explicitly encoding relative positional and directional relationships between objects, GoM guides the model to better understand spatial layouts. The approach integrates object region annotations with auxiliary graph-descriptive text, enhancing the spatial reasoning abilities of existing multimodal language models without any training. Experiments across three open-source models and four datasets demonstrate that GoM improves zero-shot accuracy by up to 11 percentage points on visual question answering and localization tasks.

📝 Abstract
Recent advances in training-free visual prompting, such as Set-of-Mark, have emerged as a promising direction for enhancing the grounding capabilities of multimodal language models (MLMs). These techniques operate by partitioning the input image into object regions and annotating them with marks, predominantly boxes with numeric identifiers, before feeding the augmented image to the MLM. However, these approaches treat marked objects as isolated entities, failing to capture the relationships between them. Building on this observation, we propose Graph-of-Mark (GoM), the first pixel-level visual prompting technique that overlays scene graphs onto the input image for spatial reasoning tasks. We evaluate GoM across 3 open-source MLMs and 4 different datasets, conducting extensive ablations on drawn components and investigating the impact of auxiliary graph descriptions in the text prompt. Our results demonstrate that GoM consistently improves the zero-shot capability of MLMs in interpreting object positions and relative directions, improving base accuracy in visual question answering and localization by up to 11 percentage points.
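The paper's implementation is not reproduced here, but the core idea of the abstract, numbering detected object regions, connecting them with direction-labeled edges, and deriving auxiliary graph-descriptive text, can be sketched as follows. All names (`graph_of_mark`, `relative_direction`, etc.) are illustrative assumptions, not the authors' code; the output of `graph_of_mark` would be rendered onto the image by a drawing library such as Pillow.

```python
# Hypothetical sketch of a Graph-of-Mark-style overlay (illustrative names,
# not the authors' implementation). Given detected bounding boxes, build
# numbered marks plus a scene graph whose edges encode coarse relative
# directions, and produce graph-descriptive text for the model's text prompt.
import math

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def relative_direction(a, b):
    """Coarse 8-way position of box b relative to box a (image y grows downward)."""
    (ax, ay), (bx, by) = center(a), center(b)
    angle = math.degrees(math.atan2(ay - by, bx - ax)) % 360
    labels = ["right of", "above right of", "above", "above left of",
              "left of", "below left of", "below", "below right of"]
    return labels[int((angle + 22.5) % 360 // 45)]

def graph_of_mark(boxes):
    """Return numbered marks and direction-labeled edges for every object pair."""
    marks = [{"id": i, "box": b, "center": center(b)} for i, b in enumerate(boxes)]
    edges = [{"src": i, "dst": j, "label": relative_direction(boxes[i], boxes[j])}
             for i in range(len(boxes)) for j in range(len(boxes)) if i != j]
    return marks, edges

def describe(edges):
    """Auxiliary graph-descriptive text: edge label tells where dst sits w.r.t. src."""
    return "; ".join(f"object {e['dst']} is {e['label']} object {e['src']}"
                     for e in edges)
```

For example, with two boxes side by side, `graph_of_mark` yields edges labeled "right of" and "left of", and `describe` turns them into sentences such as "object 1 is right of object 0" that can be appended to the text prompt alongside the annotated image.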
Problem

Research questions and friction points this paper is trying to address.

spatial reasoning
multimodal language models
visual prompting
object relationships
scene understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-of-Mark
visual prompting
spatial reasoning
multimodal language models
scene graph
Giacomo Frisoni
Department of Computer Science and Engineering, University of Bologna
Lorenzo Molfetta
Department of Computer Science and Engineering, University of Bologna
Mattia Buzzoni
Department of Computer Science and Engineering, University of Bologna
Gianluca Moro
Dept. of Computer Science and Engineering - University of Bologna, Cesena
natural language processing, data science, data mining, machine learning, sensor networks, agents, peer-to-peer systems