Attribution Analysis Meets Model Editing: Advancing Knowledge Correction in Vision Language Models with VisEdit

๐Ÿ“… 2024-08-19
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 1
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This study addresses the critical bottleneck in vision-language model (VLM) knowledge correctionโ€”namely, the necessity of full retraining. It is the first to systematically identify the pivotal role of mid-to-late-layer visual representations and prompt-correlated regions in factual prediction. Building on this insight, we propose VisEdit, the first interpretability-driven, training-free editing framework for VLMs. VisEdit enables precise cross-modal intervention via three core mechanisms: contribution allocation analysis, noise-perturbation attribution, and intermediate visual feature localization. Evaluated across multiple mainstream VLM backbones and established public editing benchmarks, VisEdit consistently outperforms LLM-adapted editors, achieving an average 12.6% improvement in editing accuracy. Moreover, it demonstrates strong generalization and minimal side effects. This work establishes a novel paradigm for efficient, interpretable, and parameter-efficient knowledge editing in vision-language models.

Technology Category

Application Category

๐Ÿ“ Abstract
Model editing aims to correct outdated or erroneous knowledge in large models without costly retraining. Recent research discovered that the mid-layer representation of the subject's final token in a prompt has a strong influence on factual predictions, and developed Large Language Model (LLM) editing techniques based on this observation. However, for Vision-LLMs (VLLMs), how visual representations impact the predictions from a decoder-only language model remains largely unexplored. To the best of our knowledge, model editing for VLLMs has not been extensively studied in the literature. In this work, we employ the contribution allocation and noise perturbation methods to measure the contributions of visual representations for token predictions. Our attribution analysis shows that visual representations in mid-to-later layers that are highly relevant to the prompt contribute significantly to predictions. Based on these insights, we propose VisEdit, a novel model editor for VLLMs that effectively corrects knowledge by editing intermediate visual representations in regions important to the edit prompt. We evaluated VisEdit using multiple VLLM backbones and public VLLM editing benchmark datasets. The results show the superiority of VisEdit over the strong baselines adapted from existing state-of-the-art editors for LLMs.
Problem

Research questions and friction points this paper is trying to address.

Visual Language Models
Knowledge Editing
Prediction Mechanism
Innovation

Methods, ideas, or system contributions that make the work stand out.

VisEdit
Visual Language Model Editing
Knowledge Accuracy Improvement
Qizhou Chen
Qizhou Chen
ECNU
Natural Language ProcessingComputer Vision
Taolin Zhang
Taolin Zhang
Hefei University of Technology
LLMVLLMDeep Learning
Chengyu Wang
Chengyu Wang
Alibaba Group
Natural Language ProcessingLarge Language ModelMulti-modal Learning
X
Xiaofeng He
East China Normal University, Shanghai, China
D
Dakan Wang
Exacity Inc., Shanghai, China
T
Tingting Liu
Alibaba Group, Hangzhou, China