HEAR: Hearing Enhanced Audio Response for Video-grounded Dialogue

📅 2023-12-15
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 6
Influential: 0
🤖 AI Summary
Existing video-grounded dialogue (VGD) systems frequently neglect the audio modality, leading to erroneous responses—termed “deaf responses”—in scenarios requiring auditory cues. This work formally defines the deaf-response phenomenon for the first time and proposes HEAR, a model-agnostic framework that enables on-demand audio perception via an attention-driven audio gating module. HEAR seamlessly integrates into diverse VGD backbones through multimodal feature alignment and cross-modal fusion mechanisms. Evaluated on AVSD@DSTC7/8, HEAR significantly improves response accuracy (+2.1 BLEU-4) and audio relevance (+14.3% audio-perception rate), demonstrating consistent gains across multiple baseline architectures. By providing a plug-and-play audio enhancement paradigm, HEAR advances multimodal dialogue systems toward robust, context-aware, and modality-adaptive generation.
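The paper itself does not spell out the gating module here, but the summary's description of a question-conditioned, attention-driven audio gate suggests a structure like the minimal sketch below. This is an illustrative assumption, not the authors' implementation; the class and parameter names (AudioGate, d_model, n_heads) are hypothetical.

```python
import torch
import torch.nn as nn

class AudioGate(nn.Module):
    """Illustrative question-conditioned audio gate (not the authors' code).

    The question embedding cross-attends over audio frame features; a scalar
    gate then decides how much of the attended audio to pass downstream.
    """

    def __init__(self, d_model: int = 768, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * d_model, 1), nn.Sigmoid())

    def forward(self, question: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # question: (B, Lq, D) token embeddings; audio: (B, La, D) frame features
        attended, _ = self.cross_attn(query=question, key=audio, value=audio)
        pooled_q = question.mean(dim=1)   # (B, D) question summary
        pooled_a = attended.mean(dim=1)   # (B, D) question-relevant audio summary
        g = self.gate(torch.cat([pooled_q, pooled_a], dim=-1))  # (B, 1) in [0, 1]
        # Scale the attended audio: close to zero when the question needs no audio.
        return g.unsqueeze(1) * attended  # (B, Lq, D)
```

The scalar gate is what gives the "on-demand" behavior described in the summary: for audio-irrelevant questions the gated features contribute little, so the backbone falls back to video and text.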
📝 Abstract
Video-grounded Dialogue (VGD) aims to answer questions regarding a given multi-modal input comprising video, audio, and dialogue history. Although there have been numerous efforts to improve the quality of VGD systems' responses, existing systems are able to incorporate only the video and text information and tend to struggle to extract the necessary information from the audio when generating appropriate responses to the question. The VGD system seems to be deaf, and thus we coin this symptom of current systems ignoring audio data a deaf response. To overcome the deaf response problem, the Hearing Enhanced Audio Response (HEAR) framework is proposed to perform sensible listening by selectively attending to audio whenever the question requires it. The HEAR framework enhances the accuracy and audibility of VGD systems in a model-agnostic manner. HEAR is validated on VGD datasets (i.e., AVSD@DSTC7 and AVSD@DSTC8) and shows effectiveness with various VGD systems.
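Because the abstract stresses that HEAR works in a model-agnostic manner, one way to picture the integration is as a thin wrapper that replaces only the audio input of an existing VGD backbone. The sketch below reuses the hypothetical AudioGate above and assumes a backbone with a keyword-argument forward interface; both are illustrative assumptions, not the paper's actual API.

```python
class HEARWrapper(nn.Module):
    """Hypothetical model-agnostic integration: gate the audio, leave the
    backbone's own fusion and generation untouched."""

    def __init__(self, backbone: nn.Module, d_model: int = 768):
        super().__init__()
        self.backbone = backbone              # any VGD system (assumed interface)
        self.audio_gate = AudioGate(d_model)  # sketch module defined above

    def forward(self, video, audio, question, history):
        gated_audio = self.audio_gate(question, audio)
        # Only the audio stream changes; video, question, and history pass through.
        return self.backbone(video=video, audio=gated_audio,
                             question=question, history=history)
```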
Problem

Research questions and friction points this paper is trying to address.

VGD systems ignore audio data in responses
Existing systems struggle with audio information extraction
Questions that require auditory cues receive inaccurate ("deaf") responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selectively attends to audio for responses
Enhances accuracy in model-agnostic way
Validated on multiple VGD datasets