🤖 AI Summary
Low signal-to-noise ratio and complex spatiotemporal dynamics in functional near-infrared spectroscopy (fNIRS) signals impede accurate decoding of auditory brain responses. To address this, we propose AbsoluteNet, a novel end-to-end deep neural network that integrates spatiotemporal convolutional layers with a customized ReLU-variant activation function. AbsoluteNet enables adaptive modeling of temporal dynamics and cross-channel aggregation of hemodynamic features, addressing the physiological modeling limitations of conventional fNIRS analysis methods. Evaluated on an auditory event-related binary classification task, AbsoluteNet achieves 87.0% accuracy, outperforming the best baseline by 3.8%, with 84.8% sensitivity and 89.2% specificity. These results demonstrate substantially improved decoding robustness and enhanced clinical applicability.
📝 Abstract
In recent years, deep learning (DL) approaches have demonstrated promising results in decoding hemodynamic responses captured by functional near-infrared spectroscopy (fNIRS), particularly in the context of brain-computer interface (BCI) applications. This work introduces AbsoluteNet, a novel deep learning architecture designed to classify auditory event-related responses recorded using fNIRS. The proposed network is built upon principles of spatio-temporal convolution and customized activation functions. Our model was compared against several baselines, namely fNIRSNET, MDNN, DeepConvNet, and ShallowConvNet. The results showed that AbsoluteNet outperforms these models, reaching 87.0% accuracy, 84.8% sensitivity, and 89.2% specificity in binary classification, surpassing fNIRSNET, the second-best model, by 3.8% in accuracy. These findings underscore the effectiveness of our proposed deep learning model in decoding hemodynamic responses related to auditory processing, and they highlight the importance of spatio-temporal feature aggregation and customized activation functions that better fit fNIRS dynamics.
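The abstract names a customized ReLU-variant activation but does not spell out its form. As a minimal sketch only, assuming the network's name hints at an absolute-value activation, the contrast with a standard ReLU could look like the following (the function name `absolute_activation` and the exact formula are assumptions, not taken from the paper):

```python
import numpy as np

def absolute_activation(x):
    """Hypothetical ReLU variant: f(x) = |x|.

    Unlike ReLU, this preserves the magnitude of negative deflections,
    which may matter for hemodynamic signals where both increases and
    decreases in oxygenation carry information (assumed rationale).
    """
    return np.abs(x)

def relu(x):
    """Standard rectified linear unit: f(x) = max(x, 0)."""
    return np.maximum(x, 0.0)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(absolute_activation(x))  # negative inputs keep their magnitude
print(relu(x))                 # negative inputs are zeroed out
```

Either choice is differentiable almost everywhere and fits in any convolutional stack; the difference is only in how negative pre-activations are treated.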