🤖 AI Summary
Existing knowledge editing methods for large language models (LLMs) and large multimodal models (LMMs) are restricted to entity-level knowledge expressed as simple triplets, and therefore fail to capture the complex visual-semantic associations and user-specific customization requirements that arise in real-world applications. Method: We introduce MMKE-Bench, the first practical multimodal knowledge editing benchmark, comprising three task categories: visual entity editing, visual semantic editing, and user-specific editing. It represents knowledge in free-form natural language and contains 2,940 knowledge items and 8,363 images across 33 categories. Our evaluation framework combines automated question generation with human verification and systematically assesses five state-of-the-art editing methods on three mainstream LMMs. Contribution/Results: Experiments reveal significant performance gaps on visual semantic and user-specific editing. MMKE-Bench pioneers semantic-level and user-specific editing paradigms and establishes a multi-dimensional robustness evaluation protocol, advancing multimodal knowledge editing toward practical deployment.
📝 Abstract
Knowledge editing techniques have emerged as essential tools for updating the factual knowledge of large language models (LLMs) and large multimodal models (LMMs), allowing them to correct outdated or inaccurate information without retraining from scratch. However, existing benchmarks for multimodal knowledge editing focus primarily on entity-level knowledge represented as simple triplets, which fail to capture the complexity of real-world multimodal information. To address these limitations, we introduce MMKE-Bench, a comprehensive MultiModal Knowledge Editing Benchmark designed to evaluate how well LMMs edit diverse visual knowledge in real-world scenarios. MMKE-Bench incorporates three types of editing tasks: visual entity editing, visual semantic editing, and user-specific editing. In addition, MMKE-Bench uses free-form natural language to represent and edit knowledge, a more flexible and expressive format than triplets. The benchmark consists of 2,940 pieces of knowledge and 8,363 images across 33 broad categories, with evaluation questions automatically generated and human-verified. We assess five state-of-the-art knowledge editing methods on three prominent LMMs, revealing that no method excels across all criteria and that visual semantic and user-specific edits are particularly challenging. MMKE-Bench sets a new standard for evaluating the robustness of multimodal knowledge editing techniques, driving progress in this rapidly evolving field.
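To make the contrast with triplet-based benchmarks concrete, below is a minimal sketch of what a free-form editing item and its evaluation questions might look like. All field names, file paths, and example values are hypothetical illustrations, not the benchmark's actual data format; the question types shown (reliability, generality, locality) are criteria commonly used in the knowledge-editing literature, under the assumption that a benchmark of this kind evaluates along similar axes.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical item schema: every field name and example value below is
# illustrative only, not MMKE-Bench's actual release format.
@dataclass
class EditItem:
    task: str                  # "visual_entity" | "visual_semantic" | "user_specific"
    image_paths: List[str]     # images grounding this piece of knowledge
    original_knowledge: str    # free-form natural-language description
    edited_knowledge: str      # target description after the edit
    eval_questions: List[Dict[str, str]] = field(default_factory=list)

# A triplet can state an entity fact, e.g. ("Eiffel Tower", "located_in",
# "Paris"), but it cannot express a multi-part visual-semantic rule such
# as a gesture paired with its meaning. Free-form text can:
item = EditItem(
    task="visual_semantic",
    image_paths=["images/example_gesture.jpg"],  # hypothetical path
    original_knowledge="When a referee makes this gesture, it signals X.",
    edited_knowledge="When a referee makes this gesture, it now signals Y.",
    eval_questions=[
        # Reliability: does the edited model produce the edited fact itself?
        {"type": "reliability", "q": "What does this gesture signal?"},
        # Generality: does the edit carry over to rephrasings or related images?
        {"type": "generality", "q": "A referee raises this sign; what does it mean?"},
        # Locality: is unrelated knowledge left untouched after the edit?
        {"type": "locality", "q": "What color is a standard stop sign?"},
    ],
)
```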