GraphVLM: Benchmarking Vision Language Models for Multimodal Graph Learning

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited reasoning capabilities of vision-language models (VLMs) on structured multimodal graph data by proposing and systematically evaluating three integration paradigms: VLM-as-Encoder, VLM-as-Aligner, and VLM-as-Predictor. Through the newly introduced GraphVLM benchmark and extensive experiments across six real-world datasets spanning multiple domains, the study demonstrates that all three paradigms improve joint understanding of, and reasoning over, multimodal entities within graph structures. Notably, the VLM-as-Predictor paradigm consistently achieves the largest and most stable performance gains. This research establishes a foundational framework for structured multimodal learning, paving the way for future advances in graph-based multimodal reasoning.

📝 Abstract
Vision-Language Models (VLMs) have demonstrated remarkable capabilities in aligning and understanding multimodal signals, yet their potential to reason over structured data, where multimodal entities are connected through explicit relational graphs, remains largely underexplored. Unlocking this capability is crucial for real-world applications such as social networks, recommendation systems, and scientific discovery, where multimodal information is inherently structured. To bridge this gap, we present GraphVLM, a systematic benchmark designed to evaluate and harness the capabilities of VLMs for multimodal graph learning (MMGL). GraphVLM investigates three complementary paradigms for integrating VLMs with graph reasoning: (1) VLM-as-Encoder, which enriches graph neural networks through multimodal feature fusion; (2) VLM-as-Aligner, which bridges modalities in latent or linguistic space to facilitate LLM-based structured reasoning; and (3) VLM-as-Predictor, which directly employs VLMs as multimodal backbones for graph learning tasks. Extensive experiments across six datasets from diverse domains demonstrate that VLMs enhance multimodal graph learning via all three roles. Among these paradigms, VLM-as-Predictor achieves the most substantial and consistent performance gains, revealing the untapped potential of vision-language models as a new foundation for multimodal graph learning. The benchmark code is publicly available at https://github.com/oamyjin/GraphVLM.
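To make the VLM-as-Encoder paradigm concrete, here is a minimal sketch of the idea the abstract describes: per-node image and text embeddings (as a VLM encoder might produce) are fused into a single node feature, then propagated over the graph with one round of mean-neighbor aggregation, as a simple GNN layer would do. All function names, dimensions, and the toy graph below are illustrative assumptions, not from the paper or its released code.

```python
# Hypothetical sketch of VLM-as-Encoder: fuse multimodal node embeddings,
# then run one mean-aggregation message-passing step over the graph.
# Toy dimensions and data; not the paper's actual implementation.

def fuse(image_emb, text_emb):
    """Fuse the two modality embeddings by concatenation."""
    return image_emb + text_emb  # list concatenation: [img..., txt...]

def mean_aggregate(features, edges):
    """One message-passing step: each node averages itself and its neighbors."""
    n = len(features)
    neighbors = {i: [i] for i in range(n)}  # self-loop so a node keeps its own feature
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    dim = len(features[0])
    out = []
    for i in range(n):
        nbrs = neighbors[i]
        out.append([sum(features[j][d] for j in nbrs) / len(nbrs)
                    for d in range(dim)])
    return out

# Toy graph: 3 nodes, each with a 2-d "image" and 2-d "text" embedding.
image_embs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
text_embs = [[0.5, 0.5], [0.5, 0.5], [0.0, 0.0]]
edges = [(0, 1), (1, 2)]

node_feats = [fuse(i, t) for i, t in zip(image_embs, text_embs)]
hidden = mean_aggregate(node_feats, edges)  # GNN-ready multimodal node states
```

In a real pipeline the fusion step and the aggregation weights would be learned, and the fused features would come from a frozen or fine-tuned VLM rather than hand-set lists; the sketch only shows where the multimodal features enter the graph computation.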
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
Multimodal Graph Learning
Structured Data
Graph Reasoning
Multimodal Entities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language Models
Multimodal Graph Learning
Graph Neural Networks
Structured Reasoning
Benchmark