MMKE-Bench: A Multimodal Editing Benchmark for Diverse Visual Knowledge

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing knowledge editing methods for large language models (LLMs) and large multimodal models (LMMs) are restricted to entity-level, triplet-form knowledge, and fail to address the complex visual-semantic associations and user-specific customization requirements of real-world applications. Method: We introduce MMKE-Bench, the first practical multimodal knowledge editing benchmark, comprising three task categories: visual entity editing, visual-semantic editing, and user-customized editing. It represents knowledge in free-form natural language and contains 2,940 knowledge items and 8,363 images across 33 categories. Our evaluation framework combines automated question generation with human verification and systematically assesses five state-of-the-art editing methods on three mainstream LMMs. Contribution/Results: Experiments reveal significant performance gaps in visual-semantic and user-customized editing. We pioneer semantic-level and user-specific editing paradigms and establish a multi-dimensional robustness evaluation protocol, advancing multimodal knowledge editing toward practical deployment.

📝 Abstract
Knowledge editing techniques have emerged as essential tools for updating the factual knowledge of large language models (LLMs) and large multimodal models (LMMs), allowing them to correct outdated or inaccurate information without retraining from scratch. However, existing benchmarks for multimodal knowledge editing primarily focus on entity-level knowledge represented as simple triplets, which fails to capture the complexity of real-world multimodal information. To address this issue, we introduce MMKE-Bench, a comprehensive MultiModal Knowledge Editing Benchmark designed to evaluate the ability of LMMs to edit diverse visual knowledge in real-world scenarios. MMKE-Bench addresses these limitations by incorporating three types of editing tasks: visual entity editing, visual semantic editing, and user-specific editing. In addition, MMKE-Bench uses free-form natural language to represent and edit knowledge, offering a more flexible and effective format. The benchmark consists of 2,940 pieces of knowledge and 8,363 images across 33 broad categories, with evaluation questions automatically generated and human-verified. We assess five state-of-the-art knowledge editing methods on three prominent LMMs, revealing that no method excels across all criteria and that visual and user-specific edits remain particularly challenging. MMKE-Bench sets a new standard for evaluating the robustness of multimodal knowledge editing techniques, driving progress in this rapidly evolving field.
Problem

Research questions and friction points this paper is trying to address.

Evaluate multimodal knowledge editing in LMMs
Address complexity in real-world visual information
Benchmark diverse visual and user-specific edits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Knowledge Editing Benchmark
Visual Entity and Semantic Editing
Free-form Natural Language Representation
Authors

Yuntao Du — Purdue University
Kailin Jiang — University of Science and Technology of China; State Key Laboratory of General Artificial Intelligence, BIGAI
Zhi Gao — State Key Laboratory of General Artificial Intelligence, BIGAI; State Key Laboratory of General Artificial Intelligence, Peking University
Chenrui Shi — Beijing Institute of Technology
Zilong Zheng — State Key Laboratory of General Artificial Intelligence, BIGAI
Siyuan Qi — Gyges Labs
Qing Li — State Key Laboratory of General Artificial Intelligence, BIGAI