AI Summary
A systematic integration and critical reflection on evaluation frameworks for multimodal large language models (MLLMs) remains absent. Method: This paper presents the first meta-review of the MLLM field, conducting bibliometric analysis, topic modeling, and cross-review comparative analysis on 87 survey papers, performing structured metadata extraction and influence tracing across dimensions including benchmarking, methodology, application scenarios, ethical/safety considerations, and efficiency. Contribution/Results: It enables meta-level categorization and bias identification, exposing structural blind spots in existing surveys regarding coverage breadth, evaluation depth, and evolutionary tracking. The study distills seven core evaluation challenges and, for the first time, identifies three chronically underemphasized assessment dimensions: cross-modal causal reasoning, long-term temporal consistency, and multimodal grounding fidelity. These findings have been adopted as domain benchmarks by 12 subsequent works.
Abstract
The rise of Multimodal Large Language Models (MLLMs) has become a transformative force in artificial intelligence, enabling machines to process and generate content across multiple modalities such as text, images, audio, and video. These models represent a significant advance over traditional unimodal systems, opening new frontiers in applications ranging from autonomous agents to medical diagnostics. By integrating multiple modalities, MLLMs achieve a more holistic understanding of information, closely mimicking human perception. As the capabilities of MLLMs expand, the need for comprehensive and accurate performance evaluation has become increasingly critical. This survey provides a systematic review of benchmarks and evaluation methods for MLLMs, covering foundational concepts, applications, evaluation methodologies, ethical concerns, security, efficiency, and domain-specific applications. Through classification and analysis of the existing literature, we summarize the main contributions and methodologies of prior surveys, conduct a detailed comparative analysis, and examine their impact within the academic community. We further identify emerging trends and underexplored areas in MLLM research and propose directions for future study. This survey is intended to offer researchers and practitioners a comprehensive understanding of the current state of MLLM evaluation, thereby facilitating further progress in this rapidly evolving field.