🤖 AI Summary
Current MLLM evaluation lacks theoretically grounded, structurally principled, and cognitively interpretable benchmarks: existing suites suffer from heuristic task grouping, ill-defined capability taxonomies, redundant metrics, and weak diagnostic power. To address this, we propose GOLD, the first hierarchical evaluation framework for MLLMs grounded in Piaget’s theory of cognitive development, organizing capabilities into three orthogonal levels: Perception, Memory, and Reasoning. GOLD pioneers the integration of Structural Equation Modeling (SEM) into MLLM assessment, enabling rigorous orthogonalization of capability dimensions, quantification of internal validity, and fine-grained attribution of each component's contribution. Through task remapping and metric decoupling, GOLD achieves substantial improvements: +42% expert agreement, −68% metric redundancy, and a 3.1× increase in cross-task capability separation. Its diagnostic accuracy surpasses that of mainstream benchmarks, including MMBench and OCRBench.
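The summary above does not include code, but to make the SEM step concrete, here is a minimal sketch of what fitting a three-factor confirmatory model over per-task scores could look like. It assumes Python with the `semopy` package, invented task names, and simulated data; it is an illustration of the general technique, not the paper's actual pipeline. Roughly, the estimated loadings and factor covariances correspond to contribution attribution, and fit indices such as CFI/RMSEA to internal-validity quantification.

```python
# Sketch: confirmatory SEM over per-task scores, in the spirit of a
# Perception / Memory / Reasoning hierarchy. Task names and data are invented.
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

rng = np.random.default_rng(0)
n_models = 200  # hypothetical: one row per evaluated MLLM

# Simulate three latent abilities and noisy task-level indicators of each.
perception = rng.normal(size=n_models)
memory = rng.normal(size=n_models)
reasoning = rng.normal(size=n_models)
noise = lambda: rng.normal(scale=0.5, size=n_models)

df = pd.DataFrame({
    "ocr":       0.8 * perception + noise(),
    "grounding": 0.7 * perception + noise(),
    "recall":    0.8 * memory     + noise(),
    "knowledge": 0.7 * memory     + noise(),
    "math":      0.8 * reasoning  + noise(),
    "logic":     0.7 * reasoning  + noise(),
})

# Measurement model in lavaan-style syntax: each latent capability (=~) is
# indicated by its task scores; semopy estimates loadings and factor covariances.
desc = """
Perception =~ ocr + grounding
Memory     =~ recall + knowledge
Reasoning  =~ math + logic
"""

model = Model(desc)
model.fit(df)

print(model.inspect())    # loadings and factor covariances (contribution attribution)
print(calc_stats(model))  # fit indices such as CFI / RMSEA (internal validity)
```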
📝 Abstract
Evaluating multimodal large language models (MLLMs) is fundamentally challenged by the absence of structured, interpretable, and theoretically grounded benchmarks; current heuristically grouped tasks have vague cognitive targets, overlapping abilities, redundant indicators, and weak diagnostic power. We therefore propose a structural-equation-modeling-aligned framework that quantifies internal validity, dimensional separability, and component contributions, and introduce a Piaget-inspired capability hierarchy that stratifies MLLM abilities into Perception, Memory, and Reasoning. Reorganizing existing tasks under this hierarchy, we build the GOLD benchmark; experiments show that it offers greater interpretability, lower indicator redundancy, and clearer cognitive consistency than prior benchmarks.
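As a rough illustration only (not the paper's definitions), indicator redundancy and dimensional separability can be read off the correlation structure of task scores: high average correlation within a capability group suggests redundant indicators, while low average correlation across groups suggests well-separated dimensions. The helper below is hypothetical and reuses the simulated `df` and invented task groupings from the sketch above.

```python
# Sketch: crude redundancy / separability readout from indicator correlations.
# Groupings and column names are hypothetical; the benchmark's exact metrics may differ.
import numpy as np
import pandas as pd

def redundancy_and_separation(scores: pd.DataFrame, groups: dict[str, list[str]]):
    """Return (mean |corr| within groups, mean |corr| between groups).
    Higher within-group correlation ~ more redundant indicators;
    lower between-group correlation ~ better-separated capability dimensions."""
    corr = scores.corr().abs()
    within, between = [], []
    for name, cols in groups.items():
        for i, a in enumerate(cols):
            for b in cols[i + 1:]:
                within.append(corr.loc[a, b])
        other = [c for g, cs in groups.items() if g != name for c in cs]
        for a in cols:
            for b in other:
                between.append(corr.loc[a, b])
    return float(np.mean(within)), float(np.mean(between))

# Usage with the simulated `df` from the previous sketch:
# groups = {"Perception": ["ocr", "grounding"],
#           "Memory":     ["recall", "knowledge"],
#           "Reasoning":  ["math", "logic"]}
# within, between = redundancy_and_separation(df, groups)
```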