AVHBench: A Cross-Modal Hallucination Benchmark for Audio-Visual Large Language Models

📅 2024-10-23
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Audio-visual large language models (AV-LLMs) suffer from cross-modal hallucination—erroneous associations between audio and visual signals—yet the field lacks dedicated, standardized benchmarks for evaluating it. Method: AVHBench is the first benchmark specifically designed to evaluate cross-modal hallucination in AV-LLMs. It formally defines and quantifies such hallucinations, establishes a three-dimensional evaluation framework covering perception, cross-modal matching, and multimodal reasoning, and constructs a test set by combining multi-granularity aligned samples with human annotation and adversarial perturbation, enabling fine-grained attribution analysis. Results: Experiments reveal that state-of-the-art AV-LLMs are consistently vulnerable to cross-modal interference, which induces widespread hallucination. Crucially, fine-tuning solely on AVHBench significantly improves hallucination robustness. This work provides foundational tools and a methodological framework for the trustworthy evaluation and optimization of audio-visual multimodal models.

📝 Abstract
Following the success of Large Language Models (LLMs), expanding their boundaries to new modalities represents a significant paradigm shift in multimodal understanding. Human perception is inherently multimodal, relying not only on text but also on auditory and visual cues for a complete understanding of the world. In recognition of this fact, audio-visual LLMs have recently emerged. Despite promising developments, the lack of dedicated benchmarks poses challenges for understanding and evaluating models. In this work, we show that audio-visual LLMs struggle to discern subtle relationships between audio and visual signals, leading to hallucinations and highlighting the need for reliable benchmarks. To address this, we introduce AVHBench, the first comprehensive benchmark specifically designed to evaluate the perception and comprehension capabilities of audio-visual LLMs. Our benchmark includes tests for assessing hallucinations, as well as the cross-modal matching and reasoning abilities of these models. Our results reveal that most existing audio-visual LLMs struggle with hallucinations caused by cross-interactions between modalities, due to their limited capacity to perceive complex multimodal signals and their relationships. Additionally, we demonstrate that simple training with our AVHBench improves robustness of audio-visual LLMs against hallucinations. Dataset: https://github.com/kaist-ami/AVHBench
Problem

Research questions and friction points this paper is trying to address.

Evaluate audio-visual LLMs' cross-modal perception and comprehension.
Address hallucinations caused by audio-visual signal misinterpretations.
Develop a benchmark to improve robustness against hallucinations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces AVHBench, the first benchmark for evaluating audio-visual LLMs
Assesses hallucinations alongside cross-modal matching and reasoning abilities
Improves model robustness against hallucinations through training on AVHBench