Graph-MLLM: Harnessing Multimodal Large Language Models for Multimodal Graph Learning

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) excel at cross-modal understanding but neglect structural relationships among data instances; integrating multimodality with graph-structured representations—termed multimodal graphs (MMGs)—is critical for applications such as social network analysis and healthcare. Existing MMG approaches are fragmented across three paradigms—MLLM-as-Encoder, MLLM-as-Aligner, and MLLM-as-Predictor—and lack a standardized evaluation benchmark. Method: We introduce Graph-MLLM, the first standardized benchmark for MMG learning, comprising six diverse datasets and multiple downstream tasks. Contributions/Results: (1) We establish the first unified evaluation framework for MMG learning; (2) Empirical results demonstrate that joint image-text modeling outperforms unimodal inputs, and text-based visual descriptions significantly boost performance; (3) Fine-tuning only the MLLM—without explicit graph-structure encoding—achieves state-of-the-art results. We open-source a fully reproducible codebase to foster fair, efficient MMG research.

📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in representing and understanding diverse modalities. However, they typically focus on modality alignment in a pairwise manner while overlooking structural relationships across data points. Integrating multimodality with structured graph information (i.e., multimodal graphs, MMGs) is essential for real-world applications such as social networks, healthcare, and recommendation systems. Existing MMG learning methods fall into three paradigms based on how they leverage MLLMs: Encoder, Aligner, and Predictor. MLLM-as-Encoder focuses on enhancing graph neural networks (GNNs) via multimodal feature fusion; MLLM-as-Aligner aligns multimodal attributes in language or hidden space to enable LLM-based graph reasoning; MLLM-as-Predictor treats MLLMs as standalone reasoners with in-context learning or fine-tuning. Despite their advances, the MMG field lacks a unified benchmark to fairly evaluate across these approaches, making it unclear what progress has been made. To bridge this gap, we present Graph-MLLM, a comprehensive benchmark for multimodal graph learning by systematically evaluating these three paradigms across six datasets with different domains. Through extensive experiments, we observe that jointly considering the visual and textual attributes of the nodes benefits graph learning, even when using pre-trained text-to-image alignment models (e.g., CLIP) as encoders. We also find that converting visual attributes into textual descriptions further improves performance compared to directly using visual inputs. Moreover, we observe that fine-tuning MLLMs on specific MMGs can achieve state-of-the-art results in most scenarios, even without explicit graph structure information. We hope that our open-sourced library will facilitate rapid, equitable evaluation and inspire further innovative research in this field.
Problem

Research questions and friction points this paper is trying to address.

Integrating multimodality with graph-structured data for real-world applications such as social networks and healthcare
No unified benchmark exists to fairly compare multimodal graph learning approaches
Improving graph learning by jointly modeling the visual and textual attributes of nodes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically benchmarks three MLLM paradigms (Encoder, Aligner, Predictor) on multimodal graphs across six datasets
Converts visual attributes into textual descriptions, which outperforms feeding visual inputs directly
Fine-tunes MLLMs on specific MMGs to reach state-of-the-art results, even without explicit graph-structure encoding
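To make the MLLM-as-Encoder paradigm concrete, here is a minimal sketch: multimodal node features are built by concatenating a text embedding and an image embedding per node (in the paper this role is played by pre-trained aligned encoders such as CLIP), then passed through one mean-aggregation message-passing step. The `encode_text`, `encode_image`, and `gnn_layer` functions below are toy stand-ins for illustration, not the benchmark's actual implementation.

```python
import numpy as np

# Toy stand-ins for pre-trained modality encoders (e.g., CLIP's text and
# image towers would produce aligned embeddings here).
def encode_text(text: str, dim: int = 4) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

def encode_image(image_id: str, dim: int = 4) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(image_id)) % (2**32))
    return rng.standard_normal(dim)

def fuse_node_features(texts, images) -> np.ndarray:
    # Joint image-text node features: concatenate the two modality embeddings.
    return np.stack([
        np.concatenate([encode_text(t), encode_image(i)])
        for t, i in zip(texts, images)
    ])

def gnn_layer(features: np.ndarray, adjacency: np.ndarray) -> np.ndarray:
    # One mean-aggregation message-passing step: each node averages its own
    # features with those of its neighbors.
    degree = adjacency.sum(axis=1, keepdims=True) + 1.0
    return (features + adjacency @ features) / degree

# A 3-node toy multimodal graph: node 0 is linked to nodes 1 and 2.
texts = ["product review A", "product review B", "product review C"]
images = ["img_a.jpg", "img_b.jpg", "img_c.jpg"]
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)

x = fuse_node_features(texts, images)  # shape (3, 8): 4 text dims + 4 image dims
h = gnn_layer(x, adj)                  # structure-aware node representations
print(h.shape)
```

The key design point the benchmark probes is exactly this split: the multimodal encoders supply per-node features, while the GNN layer injects the structural relationships that pairwise MLLM alignment alone would miss.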