Auditory Attention Decoding without Spatial Information: A Diotic EEG Study

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited generalizability of existing auditory attention decoding methods, which rely heavily on binaural spatial cues and thus struggle in realistic multi-talker scenarios where such cues are absent or ambiguous. To overcome this limitation, the authors propose a novel multimodal alignment approach operating under diotic conditions—where spatial cues are entirely eliminated. Their method leverages wav2vec 2.0 for speech feature extraction, a 1D CNN for speech encoding, and a BrainNetwork architecture to process EEG signals, aligning neural and speech representations in a shared latent space via cosine similarity to identify the attended speech stream. Evaluated on a diotic EEG dataset, the approach achieves a decoding accuracy of 72.70%, outperforming the current state-of-the-art direction-dependent method by 22.58% and significantly advancing auditory attention decoding toward more ecologically valid “cocktail party” listening conditions.

📝 Abstract
Auditory attention decoding (AAD) identifies the attended speech stream in multi-speaker environments by decoding brain signals such as electroencephalography (EEG). This technology is essential for realizing smart hearing aids that address the cocktail party problem and for facilitating objective audiometry systems. Existing AAD research mainly utilizes dichotic environments where different speech signals are presented to the left and right ears, enabling models to classify directional attention rather than speech content. However, this spatial reliance limits applicability to real-world scenarios, such as the "cocktail party" situation, where speakers overlap or move dynamically. To address this challenge, we propose an AAD framework for diotic environments where identical speech mixtures are presented to both ears, eliminating spatial cues. Our approach maps EEG and speech signals into a shared latent space using independent encoders. We extract speech features using wav2vec 2.0 and encode them with a 2-layer 1D convolutional neural network (CNN), while employing the BrainNetwork architecture for EEG encoding. The model identifies the attended speech by calculating the cosine similarity between EEG and speech representations. We evaluate our method on a diotic EEG dataset and achieve 72.70% accuracy, which is 22.58% higher than the state-of-the-art direction-based AAD method.
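The matching step described in the abstract can be sketched as follows: both encoders map their inputs into a shared latent space, and the attended speaker is the one whose speech embedding has the highest cosine similarity to the EEG embedding. This is a minimal sketch with toy stand-in vectors; the actual system derives these embeddings from wav2vec 2.0 features, a 2-layer 1D CNN, and the BrainNetwork EEG encoder, none of which are reproduced here.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def decode_attended(eeg_emb, speech_embs):
    """Pick the candidate speech stream whose latent embedding is
    most similar to the EEG embedding (the AAD decision rule)."""
    sims = [cosine_similarity(eeg_emb, s) for s in speech_embs]
    return int(np.argmax(sims)), sims

# Toy 4-D embeddings standing in for learned encoder outputs.
eeg = np.array([1.0, 0.0, 1.0, 0.0])
speakers = [
    np.array([0.9, 0.1, 0.8, 0.0]),  # aligned with the EEG embedding
    np.array([0.0, 1.0, 0.0, 1.0]),  # orthogonal to it
]
idx, sims = decode_attended(eeg, speakers)
print(idx)  # index of the decoded attended speaker
```

With these stand-in vectors the first speaker is selected, since its embedding is nearly parallel to the EEG embedding while the second is orthogonal; in the paper the same rule operates on learned representations trained so that attended speech aligns with the listener's EEG.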
Problem

Research questions and friction points this paper is trying to address.

Auditory Attention Decoding
Diotic Environment
Spatial Information
EEG
Cocktail Party Problem
Innovation

Methods, ideas, or system contributions that make the work stand out.

Auditory Attention Decoding
Diotic EEG
Shared Latent Space
wav2vec 2.0
Cocktail Party Problem
Masahiro Yoshino
Department of Electronic and Information Engineering, School of Engineering, The University of Osaka, Suita, Japan
Haruki Yokota
Graduate School of Engineering, The University of Osaka, Suita, Japan
Junya Hara
Osaka University
Graph signal processing, Sampling theory
Yuichi Tanaka
The University of Osaka
Signal processing, Image processing
Hiroshi Higashi
The University of Osaka
Signal processing, Cognitive science