UniEdit: A Unified Knowledge Editing Benchmark for Large Language Models

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM editing evaluation benchmarks suffer from narrow domain coverage, limited assessment dimensions, and insufficient consideration of the ripple effects induced by edits. To address these limitations, the authors introduce UniEdit, an open-domain, unified benchmark for evaluating large language model editing. It spans 25 diverse knowledge domains and incorporates the Neighborhood Multi-hop Chain Sampling (NMCS) algorithm to model both local and cascading impacts of edits via multi-hop subgraph sampling. Leveraging knowledge graph triples and LLM-driven controllable text generation, UniEdit produces grammatically correct, lexically diverse, and semantically faithful natural-language evaluation instances. Extensive statistical analysis validates its scale, domain breadth, and ripple-effect diversity. Furthermore, systematic evaluation across mainstream LLMs and editing methods reveals critical bottlenecks in cross-domain editing accuracy, generalization, and stability.

📝 Abstract
Model editing aims to enhance the accuracy and reliability of large language models (LLMs) by efficiently adjusting their internal parameters. Currently, most LLM editing datasets are confined to narrow knowledge domains and cover a limited range of editing evaluation dimensions. They often overlook the broad scope of editing demands and the diversity of ripple effects resulting from edits. In this context, we introduce UniEdit, a unified benchmark for LLM editing grounded in open-domain knowledge. First, we construct editing samples by selecting entities from 25 common domains across five major categories, utilizing the extensive triple knowledge available in open-domain knowledge graphs to ensure comprehensive coverage of the knowledge domains. To address the issues of generality and locality in editing, we design a Neighborhood Multi-hop Chain Sampling (NMCS) algorithm that samples subgraphs around a given knowledge piece, capturing comprehensive ripple effects for evaluation. Finally, we employ proprietary LLMs to convert the sampled knowledge subgraphs into natural-language text, guaranteeing grammatical accuracy and syntactic diversity. Extensive statistical analysis confirms the scale, comprehensiveness, and diversity of our UniEdit benchmark. We conduct comprehensive experiments across multiple LLMs and editors, analyzing their performance to highlight strengths and weaknesses in editing across open knowledge domains and various evaluation criteria, thereby offering valuable insights for future research endeavors.
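The abstract describes NMCS only at a high level: starting from an edited fact, it samples multi-hop chains in the surrounding knowledge subgraph so that both generality (facts that should change with the edit) and locality (nearby facts that should not) can be tested. The paper does not publish pseudocode here, so the sketch below is a hypothetical reconstruction; the graph representation, parameter names (`max_hops`, `num_chains`), and toy data are all assumptions.

```python
import random

# Hypothetical sketch of Neighborhood Multi-hop Chain Sampling (NMCS):
# starting from the object of an edited triple, randomly walk the
# knowledge graph to collect multi-hop chains whose answers may shift
# after the edit. Graph format and parameters are assumptions, not the
# paper's actual implementation.

def nmcs_sample(graph, edit_triple, max_hops=3, num_chains=6, seed=0):
    """Sample multi-hop triple chains rooted at the edited triple's object.

    graph: dict mapping entity -> list of (relation, entity) edges.
    edit_triple: the (subject, relation, object) fact being edited.
    Returns a list of chains; each chain is a list of triples in which
    every triple's head entity is the previous triple's tail entity.
    """
    rng = random.Random(seed)
    _, _, start = edit_triple
    chains = []
    for _ in range(num_chains):
        node, chain = start, []
        for _ in range(rng.randint(1, max_hops)):
            edges = graph.get(node, [])
            if not edges:
                break  # dead end: stop extending this chain
            rel, nxt = rng.choice(edges)
            chain.append((node, rel, nxt))
            node = nxt
        if chain and chain not in chains:
            chains.append(chain)
    return chains

# Toy graph: editing ("UK", "head_of_government", "X") should ripple to
# multi-hop questions such as "the spouse of the UK head of government".
kg = {
    "X": [("spouse", "Y"), ("party", "PartyA")],
    "Y": [("father", "Z")],
}
chains = nmcs_sample(kg, ("UK", "head_of_government", "X"))
```

Each sampled chain can then serve as a ripple-effect probe: the hop-1 triples test generality of the edit, while triples untouched by the edit serve as locality checks.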
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM accuracy via efficient parameter adjustments
Addressing limited scope in current editing datasets
Evaluating ripple effects of edits across domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified benchmark for open-domain LLM editing
Neighborhood Multi-hop Chain Sampling algorithm
Knowledge subgraphs converted to natural language
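The final construction step converts sampled subgraphs into natural-language evaluation text via proprietary LLMs. The paper does not disclose its prompts, so the snippet below only illustrates one plausible shape for that step: a deterministic prompt builder that serializes triples into an instruction. The function name and wording are hypothetical.

```python
# Hypothetical sketch of the triples-to-text step. UniEdit uses
# proprietary LLMs for verbalization; here we only build the kind of
# prompt such a model might receive. All wording is an assumption.

def build_verbalization_prompt(triples):
    """Serialize knowledge-graph triples into an LLM instruction."""
    facts = "\n".join(f"({s}, {r}, {o})" for s, r, o in triples)
    return (
        "Rewrite the following knowledge-graph triples as fluent, "
        "grammatically correct natural-language questions and answers, "
        "preserving every fact exactly:\n" + facts
    )

prompt = build_verbalization_prompt(
    [("X", "spouse", "Y"), ("Y", "father", "Z")]
)
```

Keeping the triples explicit in the prompt is what lets the generated text stay semantically faithful while the LLM supplies lexical and syntactic variety.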
Qizhou Chen
ECNU
Natural Language Processing, Computer Vision
Dakan Wang
Exacity Inc., Shanghai, China
Taolin Zhang
Hefei University of Technology
LLM, VLLM, Deep Learning
Zaoming Yan
East China Normal University, Shanghai, China
Chengsong You
East China Normal University, Shanghai, China
Chengyu Wang
Alibaba Group
Natural Language Processing, Large Language Model, Multi-modal Learning
Xiaofeng He
East China Normal University, Shanghai, China