AudioBERT: Audio Knowledge Augmented Language Model

📅 2024-09-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Language models (LMs) pretrained on text-only corpora lack basic auditory commonsense knowledge, which limits their reasoning in audio-related tasks. The paper first exposes these auditory knowledge deficits in mainstream LMs and introduces AuditoryBench, a benchmark for evaluating auditory knowledge. It then proposes AudioBERT, a framework that (1) detects auditory knowledge spans in prompts and retrieves matching audio features via cross-modal retrieval, (2) injects the retrieved knowledge into BERT's representations, and (3) switches on low-rank adaptation (LoRA) for efficient fine-tuning when audio knowledge is required. This retrieve–inject–adapt paradigm enables lightweight, scalable auditory knowledge augmentation. Experiments show that AudioBERT outperforms baselines on AuditoryBench, indicating that explicit auditory knowledge integration improves LMs' cross-modal comprehension.

📝 Abstract
Recent studies have identified that language models, pretrained on text-only datasets, often lack elementary visual knowledge, e.g., colors of everyday objects. Motivated by this observation, we ask whether a similar shortcoming exists in terms of the auditory knowledge. To answer this question, we construct a new dataset called AuditoryBench, which consists of two novel tasks for evaluating auditory knowledge. Based on our analysis using the benchmark, we find that language models also suffer from a severe lack of auditory knowledge. To address this limitation, we propose AudioBERT, a novel method to augment the auditory knowledge of BERT through a retrieval-based approach. First, we detect auditory knowledge spans in prompts to query our retrieval model efficiently. Then, we inject audio knowledge into BERT and switch on low-rank adaptation for effective adaptation when audio knowledge is required. Our experiments demonstrate that AudioBERT is quite effective, achieving superior performance on the AuditoryBench. The dataset and code are available at https://github.com/HJ-Ok/AudioBERT.
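The abstract describes a three-step pipeline: detect an auditory knowledge span in the prompt, retrieve matching audio knowledge from a cross-modal embedding space, then inject it and switch on low-rank adaptation (LoRA). The sketch below illustrates that flow in miniature with NumPy. It is not the paper's implementation: the real system uses a trained span detector, a CLAP-style audio–text retriever, and BERT; here the span detector is keyword matching, the "embedding space" is a tiny hand-made table, and the LoRA update is the generic `W + B @ A` form, all invented for illustration.

```python
import numpy as np

# Hypothetical joint text-audio embedding table standing in for a
# cross-modal retriever (the paper uses a learned retrieval model).
AUDIO_DB = {
    "dog barking": np.array([0.9, 0.1, 0.0]),
    "siren":       np.array([0.0, 0.8, 0.2]),
}

def detect_span(prompt, keywords=("dog barking", "siren")):
    """Step 1: locate an auditory-knowledge span in the prompt.
    (Keyword matching here; the paper trains a span detector.)"""
    for kw in keywords:
        if kw in prompt:
            return kw
    return None

def retrieve_audio(span_embedding):
    """Step 2: nearest-neighbour retrieval over the audio table,
    scored by dot product in the shared embedding space."""
    best = max(AUDIO_DB, key=lambda k: AUDIO_DB[k] @ span_embedding)
    return AUDIO_DB[best]

def lora_forward(x, W, A, B, enabled):
    """Steps 3-4: run the (toy) linear layer with a low-rank LoRA
    update W + B @ A, switched on only when audio knowledge is needed."""
    delta = B @ A if enabled else np.zeros_like(W)
    return x @ (W + delta).T

# Toy end-to-end pass: inject retrieved audio knowledge by fusing it
# with the (hypothetical) text span embedding before the layer.
prompt = "the sound of a dog barking is low-pitched"
span = detect_span(prompt)
text_emb = np.array([1.0, 0.0, 0.0])          # stand-in span embedding
x = text_emb + retrieve_audio(text_emb)        # knowledge injection (additive fusion)
W = np.eye(3)                                  # frozen base weight
A, B = np.zeros((1, 3)), np.zeros((3, 1))      # rank-1 LoRA factors (untrained)
out = lora_forward(x, W, A, B, enabled=span is not None)
```

The on/off switch in `lora_forward` mirrors the abstract's "switch on low-rank adaptation when audio knowledge is required": with the adapter disabled the layer reduces exactly to the frozen base weight, so text-only prompts are unaffected.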
Problem

Research questions and friction points this paper is trying to address.

Language Models
Audio Knowledge
Commonsense Understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

AudioBERT
Language Model Training
Audio Input
Hyunjong Ok
POSTECH, HJ AILAB
Suho Yoo
Inha University
Jaeho Lee
POSTECH