AbsoluteNet: A Deep Learning Neural Network to Classify Cerebral Hemodynamic Responses of Auditory Processing

📅 2025-05-27
🤖 AI Summary
Low signal-to-noise ratio and complex spatiotemporal dynamics in functional near-infrared spectroscopy (fNIRS) signals impede accurate decoding of auditory brain responses. To address this, we propose AbsoluteNet, a novel end-to-end deep neural network that integrates spatio-temporal convolutional layers with a customized ReLU-variant activation function. AbsoluteNet models temporal dynamics adaptively and aggregates hemodynamic features across channels, overcoming the physiological modeling limitations of conventional fNIRS analysis methods. Evaluated on an auditory event-related binary classification task, AbsoluteNet achieves 87.0% accuracy, outperforming the best baseline by 3.8%, with 84.8% sensitivity and 89.2% specificity. These results demonstrate substantially improved decoding robustness and enhanced clinical applicability.
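The listing does not specify the exact form of the customized ReLU-variant activation, but the idea of keeping information from both positive and negative signal excursions (plausible for hemodynamic responses, where deactivations carry information) can be illustrated with a minimal sketch. The function name `abs_relu` and the `alpha` parameter are hypothetical, not taken from the paper:

```python
import numpy as np

def abs_relu(x, alpha=0.5):
    """Hypothetical symmetric ReLU variant (illustrative only).

    Positive inputs pass through unchanged; negative inputs are folded
    back with a scaling factor alpha instead of being zeroed out, so
    negative-going hemodynamic deflections are not discarded.
    """
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, x, alpha * np.abs(x))
```

With `alpha=1.0` this reduces to the absolute-value function, and with `alpha=0.0` to a standard ReLU; the actual activation used by AbsoluteNet may differ.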

📝 Abstract
In recent years, deep learning (DL) approaches have demonstrated promising results in decoding hemodynamic responses captured by functional near-infrared spectroscopy (fNIRS), particularly in the context of brain-computer interface (BCI) applications. This work introduces AbsoluteNet, a novel deep learning architecture designed to classify auditory event-related responses recorded using fNIRS. The proposed network is built upon principles of spatio-temporal convolution and customized activation functions. Our model was compared against several models, namely fNIRSNET, MDNN, DeepConvNet, and ShallowConvNet. The results showed that AbsoluteNet outperforms existing models, reaching 87.0% accuracy, 84.8% sensitivity, and 89.2% specificity in binary classification, surpassing fNIRSNET, the second-best model, by 3.8% in accuracy. These findings underscore the effectiveness of our proposed deep learning model in decoding hemodynamic responses related to auditory processing and highlight the importance of spatio-temporal feature aggregation and customized activation functions to better fit fNIRS dynamics.
Problem

Research questions and friction points this paper is trying to address.

Classify auditory event-related fNIRS hemodynamic responses
Improve accuracy in decoding auditory processing brain signals
Enhance spatio-temporal feature aggregation for fNIRS dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatio-temporal convolution for fNIRS classification
Customized activation functions for better dynamics
Deep learning model outperforms existing benchmarks
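The spatio-temporal convolution idea above can be sketched as two stages: a temporal filter applied independently to each fNIRS channel, followed by a spatial mixing step that aggregates features across channels. This is a minimal illustration under stated assumptions, not the paper's architecture; the function and parameter names are hypothetical:

```python
import numpy as np

def spatiotemporal_block(x, t_kernel, s_weights):
    """Minimal sketch of spatio-temporal feature aggregation for fNIRS.

    x         : (channels, time) array of hemodynamic samples
    t_kernel  : (k,) temporal filter applied to every channel
    s_weights : (out_ch, channels) spatial mixing matrix across optodes

    Returns (out_ch, time - k + 1) feature maps.
    """
    # Temporal convolution: filter each channel's time series independently.
    temporal = np.stack([np.convolve(ch, t_kernel, mode="valid") for ch in x])
    # Spatial aggregation: linearly combine channels at each time step.
    return s_weights @ temporal
```

Factoring the convolution this way (temporal then spatial) is a common design in EEG/fNIRS networks such as DeepConvNet and ShallowConvNet, which the paper uses as baselines; a deep model would stack several such blocks with a nonlinearity in between.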
Behtom Adeli
Department of Electrical, Computer, & Biomedical Engineering, University of Rhode Island, RI, USA
J. McLinden
Department of Electrical, Computer, & Biomedical Engineering, University of Rhode Island, RI, USA
Pankaj Pandey
Multimodal Functional Brain Imaging Research Lab, St. Jude Children's Research Hospital, TN, USA
Ming Shao
UMass Lowell (Associate Professor)
Machine Learning · Data Mining · Computer Vision
Y. Shahriari
Department of Electrical, Computer, & Biomedical Engineering, University of Rhode Island, RI, USA