MLaGA: Multimodal Large Language and Graph Assistant

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM-based graph methods are largely confined to text-rich graphs and struggle to model multimodal graphs containing heterogeneous node attributes (e.g., text and images). This paper introduces the first large language model framework specifically designed for multimodal graph-structured data. Our approach addresses the problem through three key innovations: (1) a structure-aware multimodal encoder that jointly encodes graph topology and cross-modal node attributes; (2) a joint pretraining objective that explicitly incorporates graph structural priors; and (3) a lightweight multimodal instruction fine-tuning mechanism with projection adapters. Evaluated on multiple multimodal graph benchmarks, our method achieves state-of-the-art performance in both supervised learning and cross-domain transfer tasks, significantly outperforming existing unimodal and multimodal graph models.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) have demonstrated substantial efficacy in advancing graph-structured data analysis. Prevailing LLM-based graph methods excel in adapting LLMs to text-rich graphs, wherein node attributes are text descriptions. However, their applications to multimodal graphs--where nodes are associated with diverse attribute types, such as texts and images--remain underexplored, despite their ubiquity in real-world scenarios. To bridge the gap, we introduce the Multimodal Large Language and Graph Assistant (MLaGA), an innovative model that adeptly extends LLM capabilities to facilitate reasoning over complex graph structures and multimodal attributes. We first design a structure-aware multimodal encoder to align textual and visual attributes within a unified space through a joint graph pre-training objective. Subsequently, we implement a multimodal instruction-tuning approach to seamlessly integrate multimodal features and graph structures into the LLM through lightweight projectors. Extensive experiments across multiple datasets demonstrate the effectiveness of MLaGA compared to leading baseline methods, achieving superior performance in diverse graph learning tasks under both supervised and transfer learning scenarios.
Problem

Research questions and friction points this paper is trying to address.

Extending LLMs to analyze multimodal graphs with diverse attributes
Aligning textual and visual attributes in a unified graph space
Integrating multimodal features and graph structures into LLMs effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal encoder aligns text and visuals
Instruction-tuning integrates features into LLM
Lightweight projectors enhance graph learning tasks
🔎 Similar Papers
2024-10-09Conference on Empirical Methods in Natural Language ProcessingCitations: 0