Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations

📅 2024-10-03
🏛️ arXiv.org
📈 Citations: 4
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the pervasive hallucination problem in vision-language models (VLMs), i.e., the generation of spurious object descriptions inconsistent with image content, this paper proposes a fine-tuning-free latent-space intervention method. The approach has two components: (1) linearly projecting internal image features into the language vocabulary space, enabling interpretable identification and spatial localization of objects; and (2) a knowledge-erasure algorithm that suppresses hallucination-correlated features by orthogonalizing image features against them. The work combines interpretability analysis with controllable editing of latent VLM representations, enabling zero-shot segmentation and targeted hallucination suppression. On COCO2014, the method reduces hallucinations by up to 25.7% while preserving the model's vision-language understanding performance.

πŸ“ Abstract
We investigate the internal representations of vision-language models (VLMs) to address hallucinations, a persistent challenge despite advances in model size and training. We project VLMs' internal image representations to their language vocabulary and observe more confident output probabilities on real objects than hallucinated objects. We additionally use these output probabilities to spatially localize real objects. Building on this approach, we introduce a knowledge erasure algorithm that removes hallucinations by linearly orthogonalizing image features with respect to hallucinated object features. We show that targeted edits to a model's latent representations can reduce hallucinations by up to 25.7% on the COCO2014 dataset while preserving performance. Our findings demonstrate how a deeper understanding of VLMs' latent representations can enhance reliability and enable novel capabilities, such as zero-shot segmentation.
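As a rough illustration of the projection step described in the abstract, the sketch below multiplies image-patch hidden states by the language model's unembedding matrix to obtain per-patch vocabulary distributions, then reads off a confidence score and a coarse location for one object token. All shapes, the random weights, and the token id are made-up assumptions for illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab, n_patches = 64, 1000, 16

# Internal image-patch representations and the LM unembedding matrix
# (both random here; in the paper these come from a real VLM).
patch_feats = rng.normal(size=(n_patches, d_model))
unembed = rng.normal(size=(d_model, vocab))

# Project each patch feature into the vocabulary and softmax per patch.
logits = patch_feats @ unembed                               # (n_patches, vocab)
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)

dog_token = 42  # assumed token id for an object word, e.g. "dog"
# Max over patches: confidence that the object appears somewhere in the image
# (the paper finds this is higher for real than hallucinated objects);
# argmax over patches: a coarse spatial localization of the object.
confidence = probs[:, dog_token].max()
location = probs[:, dog_token].argmax()
```

The per-patch probabilities are what enable the zero-shot segmentation the abstract mentions: thresholding them over patches yields a rough mask for the object.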
Problem

Research questions and friction points this paper is trying to address.

Addressing hallucinations in vision-language models through internal representation analysis.
Developing a knowledge erasure algorithm to mitigate hallucinated object features.
Enhancing model reliability and enabling zero-shot segmentation capabilities.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Projects internal image representations into the language vocabulary space
Uses the resulting output probabilities to spatially localize real objects
Introduces a knowledge erasure algorithm that linearly orthogonalizes image features against hallucinated object features
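The erasure step in the last point can be sketched as a linear orthogonalization: subtract from each image feature its component along the hallucinated object's embedding direction. The names and shapes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def erase_direction(features, direction):
    """Project each row of `features` onto the subspace orthogonal to `direction`."""
    u = direction / np.linalg.norm(direction)       # unit vector for the direction
    return features - np.outer(features @ u, u)     # remove the component along u

rng = np.random.default_rng(1)
feats = rng.normal(size=(16, 64))    # internal image-patch features (assumed shape)
halluc = rng.normal(size=64)         # assumed embedding of the hallucinated object

# After editing, no patch feature retains any component along the
# hallucinated direction, so the model can no longer "read out" that object.
edited = erase_direction(feats, halluc)
```

Because the edit is a projection, features orthogonal to the hallucinated direction pass through unchanged, which is consistent with the reported preservation of overall performance.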
Nick Jiang
University of California, Berkeley
Anish Kachinthaya
University of California, Berkeley
Suzie Petryk
University of California, Berkeley
Yossi Gandelsman
Reve.art
Deep Learning · Computer Vision · Machine Learning