GraphGPT-o: Synergistic Multimodal Comprehension and Generation on Graphs

📅 2025-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the challenge of jointly modeling graph topology and multimodal semantics (text and images) in multimodal attributed graphs (MMAGs) using Multimodal Large Language Models (MLLMs). To this end, the authors propose GraphGPT-o, an end-to-end understanding and generation framework designed for MMAGs. The method introduces three key components: (1) graph-structure linearization variants that encode topological relationships into token sequences compatible with MLLMs; (2) a hierarchical graph aligner that achieves cross-modal semantic alignment at both the node and subgraph levels; and (3) an interleaved text-image inference mechanism enabling graph-aware, multi-step multimodal generation. Evaluated on three cross-domain MMAG benchmarks, the approach significantly outperforms prior methods on both understanding and generation tasks. The datasets and code are to be open-sourced upon acceptance.
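To make the linearization idea concrete, here is a minimal sketch of turning an attributed graph into a token sequence via breadth-first traversal. The `<node>`/`<edge>` markers and attribute placeholders are illustrative assumptions, not GraphGPT-o's actual token vocabulary, and the paper studies several linearization variants rather than this single one.

```python
# Hypothetical sketch: linearize a small multimodal attributed graph into
# a flat token sequence an MLLM could consume. Marker tokens are assumed.
from collections import deque

def linearize(graph, attrs, root):
    """BFS over `graph` (adjacency dict), interleaving structure markers
    with each node's attribute (text, or an image placeholder token)."""
    tokens, seen, queue = [], {root}, deque([root])
    while queue:
        node = queue.popleft()
        tokens += ["<node>", attrs[node]]      # emit the node's multimodal attribute
        for nbr in graph[node]:
            tokens += ["<edge>", str(nbr)]     # record the topological link
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return tokens

toy_graph = {0: [1, 2], 1: [0], 2: [0]}
toy_attrs = {0: "paper title", 1: "<img_1>", 2: "cited abstract"}
print(linearize(toy_graph, toy_attrs, 0))
```

A real system would map these markers to reserved tokens in the MLLM's vocabulary and bound the traversal depth to fit the context window.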

📝 Abstract
The rapid development of Multimodal Large Language Models (MLLMs) has enabled the integration of multiple modalities, including texts and images, within the large language model (LLM) framework. However, texts and images are usually interconnected, forming a multimodal attributed graph (MMAG). It is underexplored how MLLMs can incorporate the relational information (i.e., graph structure) and semantic information (i.e., texts and images) on such graphs for multimodal comprehension and generation. In this paper, we propose GraphGPT-o, which supports omni-multimodal understanding and creation on MMAGs. We first comprehensively study linearization variants to transform semantic and structural information as input for MLLMs. Then, we propose a hierarchical aligner that enables deep graph encoding, bridging the gap between MMAGs and MLLMs. Finally, we explore the inference choices, adapting MLLMs to interleaved text and image generation in graph scenarios. Extensive experiments on three datasets from different domains demonstrate the effectiveness of our proposed method. Datasets and code will be open-sourced upon acceptance.
Problem

Research questions and friction points this paper is trying to address.

Integrating graph structure into MLLMs
Enhancing multimodal understanding and generation
Bridging the gap between MMAGs and MLLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

GraphGPT-o for multimodal graphs
Hierarchical aligner for deep encoding
Interleaved text-image generation adaptation
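The hierarchical aligner's two granularities can be sketched with simple pooling: keep per-node embeddings for node-level alignment, then summarize each subgraph into one vector for subgraph-level alignment. This is a hedged toy illustration; the paper's actual aligner architecture (its projections and attention layers) is not reproduced here.

```python
# Hedged sketch of two-level alignment targets: node-level embeddings are
# kept as-is, and each subgraph is mean-pooled into a single summary vector.
def mean_pool(vectors):
    """Element-wise average of equal-length embedding vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def hierarchical_align(node_embs, subgraphs):
    """Return (node-level, subgraph-level) alignment representations,
    where `subgraphs` lists the node ids belonging to each subgraph."""
    node_level = dict(node_embs)
    subgraph_level = [mean_pool([node_level[i] for i in sg]) for sg in subgraphs]
    return node_level, subgraph_level

embs = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
_, sg = hierarchical_align(embs, [[0, 1], [2]])
print(sg)  # → [[0.5, 0.5], [1.0, 1.0]]
```

In the full model, both levels would be matched against the MLLM's text/image embeddings so that structure-aware features enter the generation loop.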