Evaluation of Deep Audio Representations for Hearables

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the need for acoustic environment perception and speech-source analysis in hearables, this paper introduces DEAR, the first deep audio representation benchmark dedicated to hearables. DEAR comprises 1,158 spatially mixed 30-second audio clips and eight acoustic scene understanding tasks. Its multi-task evaluation framework enables a systematic assessment of foundation models across three capabilities: environmental context recognition, speech-source characterization, and technical acoustic attribute modeling. Experiments with general-purpose audio models show that BEATs outperforms the other baselines, underscoring the value of diverse pretraining data for generalization to hearable-specific scenarios. DEAR is released as an open-source dataset, filling a gap in evaluation benchmarks for hearable-oriented audio representation learning.

📝 Abstract
Effectively steering hearable devices requires understanding the acoustic environment around the user. In the computational analysis of sound scenes, foundation models have emerged as the state of the art to produce high-performance, robust, multi-purpose audio representations. We introduce and release Deep Evaluation of Audio Representations (DEAR), the first dataset and benchmark to evaluate the efficacy of foundation models in capturing essential acoustic properties for hearables. The dataset includes 1,158 audio tracks, each 30 seconds long, created by spatially mixing proprietary monologues with commercial, high-quality recordings of everyday acoustic scenes. Our benchmark encompasses eight tasks that assess the general context, speech sources, and technical acoustic properties of the audio scenes. Through our evaluation of four general-purpose audio representation models, we demonstrate that the BEATs model significantly surpasses its counterparts. This superiority underscores the advantage of models trained on diverse audio collections, confirming their applicability to a wide array of auditory tasks, including encoding the environment properties necessary for hearable steering. The DEAR dataset and associated code are available at https://dear-dataset.github.io.
Problem

Research questions and friction points this paper is trying to address.

Evaluate audio representations for hearable devices
Assess foundation models in acoustic scene analysis
Develop benchmark for encoding essential acoustic properties
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep Audio Representations
Foundation Models Evaluation
BEATs Model Superiority
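The evaluation style described above (scoring frozen foundation-model representations on downstream tasks) can be sketched as a generic linear probe. This is a minimal illustration, not the paper's actual pipeline: the random embeddings, dimensions, and class count are placeholders standing in for per-clip features from a model such as BEATs and labels from one of the eight benchmark tasks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder stand-in for frozen foundation-model embeddings:
# in a real evaluation these would be one feature vector per audio clip.
rng = np.random.default_rng(0)
n_clips, embed_dim, n_classes = 200, 64, 4
embeddings = rng.normal(size=(n_clips, embed_dim))
labels = rng.integers(0, n_classes, size=n_clips)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.25, random_state=0
)

# Linear probe: a simple classifier on top of frozen representations,
# so the test score reflects what the embedding itself encodes.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = probe.score(X_test, y_test)
print(f"probe accuracy: {accuracy:.2f}")
```

Because the backbone stays frozen and only the probe is trained, differences in probe accuracy across models can be attributed to the representations rather than to task-specific fine-tuning.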
Fabian Groger
Lucerne University of Applied Sciences and Arts, Rotkreuz, Switzerland
Pascal Baumann
Lucerne University of Applied Sciences and Arts, Rotkreuz, Switzerland
L. Amruthalingam
Lucerne University of Applied Sciences and Arts, Rotkreuz, Switzerland
Laurent Simon
Sonova AG, Stäfa, Switzerland
Ruksana Giurda
Sonova AG, Stäfa, Switzerland
Simone Lionetti
Senior Research Associate, HSLU
Machine Learning, Theoretical Particle Physics