Soundscape Captioning using Sound Affective Quality Network and Large Language Model

📅 2024-06-09
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Existing computational auditory scene analysis (CASA) methods focus on objective acoustic attributes, neglecting human affective responses to sound. Method: The paper introduces soundscape captioning as a novel task: generating context-aware, natural-language descriptions that jointly capture acoustic scenes, sound events, and perceived affective qualities (AQs), replacing labour-intensive subjective ratings and surveys. The proposed SoundSCaper system couples a multi-scale acoustic model, SoundAQnet, which jointly models scenes, events, and AQs, with a large language model (LLM) that parses SoundAQnet's output into captions. Contribution/Results: In an assessment by a jury of 16 audio/soundscape experts, captions generated by SoundSCaper scored on average only 0.21 and 0.25 (out of 5) below captions written by two soundscape experts, on the evaluation set and on a model-unknown mixed external dataset respectively, with no statistically significant difference. This establishes a scalable, human-centred paradigm for soundscape assessment.

📝 Abstract
We live in a rich and varied acoustic world, which is experienced by individuals or communities as a soundscape. Computational auditory scene analysis, disentangling acoustic scenes by detecting and classifying events, focuses on objective attributes of sounds, such as their category and temporal characteristics, ignoring their effects on people, such as the emotions they evoke within a context. To fill this gap, we propose the soundscape captioning task, which enables automated soundscape analysis, thus avoiding the labour-intensive subjective ratings and surveys of conventional methods. With soundscape captioning, context-aware descriptions are generated for soundscapes by capturing the acoustic scene, event information, and the corresponding human affective qualities (AQs). To this end, we propose an automatic soundscape captioner (SoundSCaper) system composed of an acoustic model, i.e., SoundAQnet, and a large language model (LLM). SoundAQnet simultaneously models multi-scale information about acoustic scenes, events, and perceived AQs, while the LLM describes the soundscape with captions by parsing the information captured with SoundAQnet. The quality of the soundscape captions is assessed by a jury of 16 audio/soundscape experts. The average score (out of 5) of SoundSCaper-generated captions is lower than that of captions written by two soundscape experts by 0.21 and 0.25, respectively, on the evaluation set and on a model-unknown mixed external dataset with varying lengths and acoustic properties, but the differences are not statistically significant. Overall, the proposed SoundSCaper shows promising performance, with generated captions comparable to those annotated by soundscape experts. The code of models, LLM scripts, human assessment data and instructions, and expert evaluation statistics are all publicly available.
Problem

Research questions and friction points this paper is trying to address.

Conventional soundscape assessment relies on labour-intensive subjective ratings and surveys
CASA methods describe objective sound attributes but ignore the emotions sounds evoke in context
How to couple acoustic modeling with large language models to caption soundscapes end-to-end
Innovation

Methods, ideas, or system contributions that make the work stand out.

SoundAQnet jointly models multi-scale acoustic scenes, sound events, and perceived affective qualities
LLM generates soundscape captions by parsing SoundAQnet's structured output
SoundSCaper unifies acoustic analysis, affective quality modeling, and LLM-based captioning
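The two-stage pipeline above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function name, the example labels, and the affective-quality dimensions (drawn from the ISO 12913-style AQ circumplex) are all assumptions; the actual LLM prompting scripts are in the authors' released code.

```python
# Illustrative sketch of a SoundSCaper-style pipeline: an acoustic model's
# structured predictions (scene, events, affective qualities) are rendered
# into a text prompt that an LLM would turn into a soundscape caption.
# All names and values here are hypothetical.

def build_caption_prompt(scene: str, events: list[str], aq: dict[str, float]) -> str:
    """Render SoundAQnet-style predictions into an LLM prompt."""
    aq_text = ", ".join(f"{name}: {score:.1f}/5" for name, score in aq.items())
    return (
        "Write one natural-language soundscape caption.\n"
        f"Acoustic scene: {scene}\n"
        f"Sound events: {', '.join(events)}\n"
        f"Perceived affective qualities (1-5): {aq_text}\n"
    )

# Example output an acoustic model might produce for one audio clip.
predictions = {
    "scene": "public square",
    "events": ["human voices", "footsteps", "birds"],
    "aq": {"pleasant": 3.8, "eventful": 3.2, "calm": 2.9, "chaotic": 1.7},
}

prompt = build_caption_prompt(**predictions)
print(prompt)
```

The design point is the division of labour: the acoustic model supplies grounded perceptual facts, and the LLM only verbalizes them, so the caption stays tied to what was actually heard rather than hallucinated from audio directly.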