On Path to Multimodal Generalist: General-Level and General-Bench

📅 2025-05-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation benchmarks inadequately quantify the progress of multimodal large language models (MLLMs) toward human-level multimodal general intelligence. To address this, we propose the first systematic evaluation paradigm for multimodal general intelligence. Our method introduces: (1) General-Level, a novel five-tier taxonomy for quantifying multimodal generality; (2) Synergy, a core metric measuring cross-modal understanding–generation consistency and cooperative capability; and (3) General-Bench, a large-scale unified benchmark encompassing 700+ tasks and 325K samples, covering the broadest spectrum of modalities, formats, and capability dimensions to date. Applying this framework, we systematically evaluate over 100 state-of-the-art MLLMs, revealing a substantial gap between current models and true multimodal general intelligence. Our work establishes a quantitative, reproducible, and scalable assessment pathway toward artificial general intelligence.

📝 Abstract
The Multimodal Large Language Model (MLLM) is currently experiencing rapid growth, driven by the advanced capabilities of LLMs. Unlike earlier specialists, existing MLLMs are evolving towards a Multimodal Generalist paradigm. Initially limited to understanding multiple modalities, these models have advanced to not only comprehend but also generate across modalities. Their capabilities have expanded from coarse-grained to fine-grained multimodal understanding and from supporting limited modalities to arbitrary ones. While many benchmarks exist to assess MLLMs, a critical question arises: Can we simply assume that higher performance across tasks indicates a stronger MLLM capability, bringing us closer to human-level AI? We argue that the answer is not as straightforward as it seems. This project introduces General-Level, an evaluation framework that defines 5-scale levels of MLLM performance and generality, offering a methodology to compare MLLMs and gauge the progress of existing systems towards more robust multimodal generalists and, ultimately, towards AGI. At the core of the framework is the concept of Synergy, which measures whether models maintain consistent capabilities across comprehension and generation, and across multiple modalities. To support this evaluation, we present General-Bench, which encompasses a broader spectrum of skills, modalities, formats, and capabilities, including over 700 tasks and 325,800 instances. The evaluation results that involve over 100 existing state-of-the-art MLLMs uncover the capability rankings of generalists, highlighting the challenges in reaching genuine AI. We expect this project to pave the way for future research on next-generation multimodal foundation models, providing a robust infrastructure to accelerate the realization of AGI. Project page: https://generalist.top/
Problem

Research questions and friction points this paper is trying to address.

Evaluating MLLM performance and generality across multiple modalities
Assessing synergy between comprehension and generation in MLLMs
Developing a benchmark for robust multimodal generalist AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces General-Level evaluation framework for MLLMs
Proposes Synergy concept for multimodal consistency
Develops General-Bench with over 700 tasks and 325,800 instances
Hao Fei
National University of Singapore
Vision and Language, Large Language Model, Natural Language Processing, World Modeling
Yuan Zhou
NTU
Juncheng Li
East China Normal University
Super Resolution, Image Restoration, Computer Vision, Medical Image Analysis
Xiangtai Li
Research Scientist, TikTok, Singapore; MMLab@NTU
Generative AI, Computer Vision
Qingshan Xu
Nanyang Technological University
Computer Vision, 3D Reconstruction
Bobo Li
National University of Singapore
Natural Language Processing
Shengqiong Wu
National University of Singapore
Multimodal Learning, Visual Modeling, Large Language Model, Natural Language Processing
Yaoting Wang
Fudan University
Multimodal LLM, Omni MLLM, Audio-Visual Learning, Segmentation
Junbao Zhou
Ph.D Student
Computer Vision, 3D Vision
Jiahao Meng
Peking University
Qingyu Shi
Peking University
Computer Vision, Diffusion, Multimodal
Zhiyuan Zhou
PhD student, UC Berkeley
Robotics, Reinforcement Learning
Liangtao Shi
HFUT
Minghe Gao
Zhejiang University
Machine Learning
Daoan Zhang
PhD Student, University of Rochester
Computer Vision, Multimodal Learning, LLM
Zhiqi Ge
ZJU
Weiming Wu
NJU
Siliang Tang
Professor of Computer Science, Zhejiang University
Natural Language Processing, Cross-media Analysis, Graph Neural Network
Kaihang Pan
Zhejiang University
NLP, Vision-and-Language
Yaobo Ye
ZJU
Haobo Yuan
UC Merced
Computer Vision, Deep Learning
Tao Zhang
WHU
Tianjie Ju
Shanghai Jiao Tong University
Natural Language Processing
Zixiang Meng
WHU
Shilin Xu
Peking University
Computer Vision
Liyu Jia
Nanyang Technological University
Wentao Hu
PhD student, The Hong Kong Polytechnic University
Large Language Model, Computer Vision
Meng Luo
National University of Singapore
Human-Centered AI, Multimodal Understanding, Multimodal Reasoning
Jiebo Luo
UR
Tat-Seng Chua
NUS
Shuicheng Yan
NUS
Hanwang Zhang
NTU