MMKU-Bench: A Multimodal Update Benchmark for Diverse Visual Knowledge

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of updating multimodal models with evolving real-world knowledge after pretraining, a task hindered by existing methods that neglect the need to revise previously acquired knowledge and lack mechanisms for evaluating cross-modal consistency. To bridge this gap, we introduce MMKU-Bench, a comprehensive benchmark comprising over 25k knowledge instances and 49k images, which uniquely supports systematic evaluation of both “known knowledge updating” and “unknown knowledge learning” scenarios while incorporating cross-modal consistency analysis. Using this benchmark, we conduct a systematic assessment of prevalent updating strategies—including supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), and knowledge editing (KE)—revealing that SFT and RLHF are prone to catastrophic forgetting, whereas KE better preserves general capabilities yet still exhibits significant limitations in continuous knowledge updating.

📝 Abstract
As real-world knowledge continues to evolve, the parametric knowledge that multimodal models acquire during pretraining becomes increasingly hard to keep consistent with the real world. Existing research on multimodal knowledge updating focuses only on learning previously unknown knowledge, while overlooking the need to update knowledge that the model has already mastered but that later changes; moreover, evaluation is limited to a single modality, lacking a systematic analysis of cross-modal consistency. To address these issues, this paper proposes MMKU-Bench, a comprehensive evaluation benchmark for multimodal knowledge updating, which contains over 25k knowledge instances and more than 49k images, covering two scenarios, updated knowledge and unknown knowledge, thereby enabling comparative analysis of learning across different knowledge types. On this benchmark, we evaluate a variety of representative approaches, including supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), and knowledge editing (KE). Experimental results show that SFT and RLHF are prone to catastrophic forgetting, while KE better preserves general capabilities but exhibits clear limitations in continual updating. Overall, MMKU-Bench provides a reliable and comprehensive evaluation benchmark for multimodal knowledge updating, advancing progress in this field.
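To make the cross-modal consistency idea concrete, the sketch below shows one plausible way such a metric could be computed: for each knowledge instance, compare the model's answer to a text-only query with its answer to an image-based query about the same fact, and count an instance as consistent only when both answers agree and match the updated ground truth. This is a hypothetical illustration; the function names and data fields here are illustrative and are not the paper's actual benchmark API.

```python
def normalize(answer: str) -> str:
    """Lowercase and strip punctuation for lenient answer matching."""
    return "".join(ch for ch in answer.lower()
                   if ch.isalnum() or ch.isspace()).strip()

def cross_modal_consistency(instances):
    """Fraction of instances where the text-query answer and the
    image-query answer agree with each other AND with the updated
    ground truth. Each instance is a dict with keys 'text_answer',
    'image_answer', and 'target' (all hypothetical field names)."""
    if not instances:
        return 0.0
    consistent = 0
    for inst in instances:
        t = normalize(inst["text_answer"])
        v = normalize(inst["image_answer"])
        gold = normalize(inst["target"])
        if t == v == gold:  # same answer in both modalities, and correct
            consistent += 1
    return consistent / len(instances)

# Toy example: one instance consistent and correct, one inconsistent.
data = [
    {"text_answer": "Paris", "image_answer": "paris", "target": "Paris"},
    {"text_answer": "London", "image_answer": "Berlin", "target": "Berlin"},
]
print(cross_modal_consistency(data))  # -> 0.5
```

A stricter variant could additionally count instances where the two modalities agree on a wrong answer, separating "consistently wrong" from "inconsistent" failure modes.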
Problem

Research questions and friction points this paper is trying to address.

multimodal knowledge updating
knowledge evolution
cross-modal consistency
catastrophic forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal knowledge updating
knowledge benchmark
cross-modal consistency
catastrophic forgetting
knowledge editing
Authors

Baochen Fu, Shandong University, Jinan, China
Yuntao Du, Purdue University
Cheng Chang, Shandong University, Jinan, China
Baihao Jin, Shandong University, Jinan, China
Wenzhi Deng, Shandong University, Jinan, China
Muhao Xu, PhD, Shandong University
Hongmei Yan, Jinzhong Group, Jinan, China
Weiye Song, Postdoctoral Fellow, Harvard Medical School, Massachusetts General Hospital Wellman Center
Yi Wan, Pokee AI