🤖 AI Summary
This work addresses event confusion in multi-label audio tagging caused by acoustic similarity by introducing a novel task, Geospatial Audio Tagging (Geo-AT), which leverages semantic context from geographic information systems, such as points of interest, to enhance environmental sound understanding. To support this task, the authors present Geo-ATBench, the first audio tagging benchmark integrating geospatial semantics, and propose GeoFusion-AT, a unified multi-level fusion framework that jointly models audio and geographic information at the feature, representation, and decision levels. Experimental results demonstrate that incorporating geographic context substantially improves model performance on acoustically confusable labels, and a listening study finds no significant difference between model performance measured against benchmark labels and against aggregated human labels, confirming the human alignment of the proposed benchmark.
📝 Abstract
Environmental sound understanding in computational auditory scene analysis (CASA) is typically formulated as an audio-only recognition problem. This formulation leaves a persistent weakness in multi-label audio tagging (AT): acoustically similar events can be difficult to separate from waveforms alone, and the disambiguating cues often lie outside the waveform. Geospatial semantic context (GSC), derived from geographic information system data such as points of interest (POI), provides location-tied environmental priors that can reduce this ambiguity. To enable a systematic study of this direction, we propose the geospatial audio tagging (Geo-AT) task, which conditions multi-label sound event tagging on GSC alongside audio. To benchmark Geo-AT, we introduce Geo-ATBench, a polyphonic audio benchmark with geographic annotations containing 10.71 hours of audio across 28 event categories; each clip is paired with a GSC representation drawn from 11 semantic context categories. We further propose GeoFusion-AT, a unified geo-audio fusion framework that evaluates feature-, representation-, and decision-level fusion on representative audio backbones, alongside audio-only and GSC-only baselines. Results show that incorporating GSC improves AT performance, especially on acoustically confusable labels, indicating that geospatial semantics provide effective priors beyond audio alone. A crowdsourced listening study with 10 participants on 579 samples finds no significant difference between model performance measured against Geo-ATBench labels and against aggregated human labels, supporting Geo-ATBench as a human-aligned benchmark. The Geo-AT task, the Geo-ATBench benchmark, and the reproducible GeoFusion-AT fusion framework provide a foundation for studying AT with geospatial semantic context within the CASA community. The dataset, code, and models are available at the project homepage (https://github.com/WuYanru2002/Geo-ATBench).
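The three fusion levels named in the abstract can be illustrated with a minimal sketch. The dimensions, weights, and function names below are illustrative assumptions for exposition, not the paper's actual GeoFusion-AT implementation; only the category counts (28 events, 11 GSC categories) come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EVENTS = 28   # sound event categories in Geo-ATBench
N_GSC = 11      # geospatial semantic context categories
D_AUDIO = 16    # toy audio feature size (assumption)
D_SHARED = 8    # toy shared-space size (assumption)

def feature_level_fusion(audio_feat, gsc_vec):
    """Append the GSC vector to the audio features before encoding."""
    return np.concatenate([audio_feat, gsc_vec])

def representation_level_fusion(audio_emb, gsc_vec, W_a, W_g):
    """Project each modality into a shared space and sum the projections."""
    return W_a @ audio_emb + W_g @ gsc_vec

def decision_level_fusion(audio_logits, gsc_logits, alpha=0.7):
    """Blend per-modality label logits with a fixed weight (assumption)."""
    return alpha * audio_logits + (1 - alpha) * gsc_logits

# Toy inputs: a random audio feature and a one-hot GSC indicator
# (e.g., the clip was recorded near one particular POI category).
audio_feat = rng.normal(size=D_AUDIO)
gsc_vec = np.zeros(N_GSC)
gsc_vec[3] = 1.0

fused_feat = feature_level_fusion(audio_feat, gsc_vec)           # (27,)
W_a = rng.normal(size=(D_SHARED, D_AUDIO))
W_g = rng.normal(size=(D_SHARED, N_GSC))
shared = representation_level_fusion(audio_feat, gsc_vec, W_a, W_g)  # (8,)
logits = decision_level_fusion(rng.normal(size=N_EVENTS),
                               rng.normal(size=N_EVENTS))            # (28,)

print(fused_feat.shape, shared.shape, logits.shape)
```

Each function corresponds to one fusion point: feature-level fusion merges modalities at the encoder input, representation-level fusion merges learned embeddings, and decision-level fusion merges per-label predictions.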