Abstractive Visual Understanding of Multi-modal Structured Knowledge: A New Perspective for MLLM Evaluation

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multi-modal large language model (MLLM) evaluation benchmarks largely overlook models' capacity for abstract visual understanding of structured knowledge, particularly knowledge graph subgraphs. Method: We introduce M3STR, the first benchmark to systematically assess MLLMs on multi-modal entity recognition and complex relational topology reasoning. The approach establishes a novel evaluation paradigm for abstract visual comprehension of structured knowledge, designs a structure-guided image synthesis technique over multi-modal knowledge graphs that precisely aligns relational topology with the rendered image, and implements an end-to-end automated benchmark-construction pipeline. Contribution/Results: Zero-shot evaluation of 26 state-of-the-art MLLMs reveals substantial deficiencies in abstract structural visual reasoning. All data, annotations, and code are publicly released to advance joint symbol-perception reasoning in MLLMs.

📝 Abstract
Multi-modal large language models (MLLMs) incorporate heterogeneous modalities into LLMs, enabling a comprehensive understanding of diverse scenarios and objects. Despite the proliferation of evaluation benchmarks and leaderboards for MLLMs, they predominantly overlook the critical capacity of MLLMs to comprehend world knowledge with structured abstractions that appear in visual form. To address this gap, we propose a novel evaluation paradigm and devise M3STR, an innovative benchmark grounded in the Multi-Modal Map for STRuctured understanding. This benchmark leverages multi-modal knowledge graphs to synthesize images encapsulating subgraph architectures enriched with multi-modal entities. M3STR necessitates that MLLMs not only recognize the multi-modal entities within the visual inputs but also decipher intricate relational topologies among them. We delineate the benchmark's statistical profiles and automated construction pipeline, accompanied by an extensive empirical analysis of 26 state-of-the-art MLLMs. Our findings reveal persistent deficiencies in processing abstractive visual information with structured knowledge, thereby charting a pivotal trajectory for advancing MLLMs' holistic reasoning capacities. Our code and data are released at https://github.com/zjukg/M3STR
Problem

Research questions and friction points this paper is trying to address.

Evaluating MLLMs' comprehension of structured knowledge presented visually
Assessing multi-modal entity and relation recognition
Closing the gap in processing abstractive visual information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Image synthesis from multi-modal knowledge graph subgraphs
M3STR benchmark for structured visual understanding
Automated construction pipeline and zero-shot evaluation of 26 MLLMs
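To make the subgraph-based setup concrete, the following is a minimal, hypothetical sketch of the kind of structure-guided subgraph extraction that could feed an image-synthesis step. The toy knowledge graph, the function name `k_hop_subgraph`, and all entities here are illustrative assumptions, not the paper's actual pipeline or data.

```python
# Hypothetical sketch: extract a k-hop subgraph around a seed entity,
# the structural unit M3STR-style benchmarks render as an image.
# The KG and all names below are illustrative, not from the paper.
from collections import deque

# Toy knowledge graph as (head, relation, tail) triples.
TRIPLES = [
    ("Einstein", "bornIn", "Ulm"),
    ("Einstein", "field", "Physics"),
    ("Ulm", "locatedIn", "Germany"),
    ("Physics", "studies", "Matter"),
]

def k_hop_subgraph(triples, seed, k):
    """Collect all triples reachable within k hops of `seed`, treating
    edges as undirected for neighborhood expansion (BFS)."""
    adj = {}
    for h, r, t in triples:
        adj.setdefault(h, []).append((h, r, t))
        adj.setdefault(t, []).append((h, r, t))
    seen = {seed}
    sub = set()
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand beyond k hops
        for h, r, t in adj.get(node, []):
            sub.add((h, r, t))
            for nxt in (h, t):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return sorted(sub)

# 1-hop neighborhood of "Einstein": only its two incident triples.
sub = k_hop_subgraph(TRIPLES, "Einstein", 1)
```

In a full pipeline, the returned triples would then be laid out and rendered (e.g., with a graph-drawing tool) so that the image's topology matches the subgraph exactly; the extraction step above is only the structural front end.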
👥 Authors
Yichi Zhang
Zhejiang University
Zhuo Chen
Zhejiang University
Lingbing Guo
Tianjin University
Yajing Xu
Zhejiang University
Min Zhang
Harbin Institute of Technology
Wen Zhang
Zhejiang University
Huajun Chen
Zhejiang University

Topics: Machine learning, Artificial Intelligence