Toward Improving fNIRS Classification: A Study on Activation Functions in Deep Neural Architectures

📅 2025-07-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Functional near-infrared spectroscopy (fNIRS) signals suffer from low signal-to-noise ratio and high inter-subject variability, posing significant challenges for deep learning–based classification. Method: This study systematically investigates the impact of activation functions on fNIRS signal decoding performance across four representative architectures—fNIRSNet, AbsoluteNet, MDNN, and ShallowConvNet—under standardized preprocessing and training protocols. Contribution/Results: We identify “symmetry” as a critical activation function property uniquely suited to fNIRS data—a novel insight empirically validated for the first time. Tanh and Abs(x) consistently outperform ReLU, especially in shallow networks. Building on this, we propose the Modified Absolute Function (MAF), a symmetry-preserving activation. Experimental results demonstrate that symmetric activations yield average accuracy improvements of 2.3–4.1% across models. This work establishes both theoretical justification and practical guidelines for activation function selection in fNIRS-based brain–computer interfaces.

📝 Abstract
Activation functions are critical to the performance of deep neural networks, particularly in domains such as functional near-infrared spectroscopy (fNIRS), where nonlinearity, low signal-to-noise ratio (SNR), and signal variability pose significant challenges to model accuracy. However, the impact of activation functions on deep learning (DL) performance in the fNIRS domain remains underexplored and lacks systematic investigation in the current literature. This study evaluates a range of conventional and field-specific activation functions for fNIRS classification tasks using multiple deep learning architectures, including the domain-specific fNIRSNet, AbsoluteNet, MDNN, and ShallowConvNet (as the baseline), all tested on a single dataset recorded during an auditory task. To ensure a fair comparison, all networks were trained and tested using standardized preprocessing and consistent training parameters. The results show that symmetrical activation functions such as Tanh and the absolute value function Abs(x) can outperform commonly used functions like the Rectified Linear Unit (ReLU), depending on the architecture. Additionally, a focused analysis of the role of symmetry was conducted using a Modified Absolute Function (MAF), with results further supporting the effectiveness of symmetrical activation functions for performance gains. These findings underscore the importance of selecting activation functions that align with the signal characteristics of fNIRS data.
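The symmetry property the abstract emphasizes can be stated concretely: Abs(x) is even-symmetric (f(-x) = f(x)) while Tanh is odd-symmetric (f(-x) = -f(x)), and ReLU is neither. A minimal sketch checking these properties numerically; the helper names are illustrative and not from the paper's code, and the MAF itself is not reproduced here since its exact form is not given on this page:

```python
import math

def is_even(f, xs, tol=1e-12):
    """Even symmetry: f(-x) == f(x) at every test point."""
    return all(abs(f(-x) - f(x)) < tol for x in xs)

def is_odd(f, xs, tol=1e-12):
    """Odd symmetry: f(-x) == -f(x) at every test point."""
    return all(abs(f(-x) + f(x)) < tol for x in xs)

relu = lambda x: max(0.0, x)
xs = [0.1 * i for i in range(-50, 51)]  # test grid on [-5, 5]

print(is_even(abs, xs))        # True: Abs(x) is even-symmetric
print(is_odd(math.tanh, xs))   # True: Tanh is odd-symmetric
print(is_even(relu, xs))       # False: ReLU breaks symmetry
print(is_odd(relu, xs))        # False
```

The intuition suggested by the abstract is that fNIRS responses carry task-relevant information in both signal polarities, so activations that treat positive and negative deflections symmetrically discard less information than ReLU's one-sided rectification.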
Problem

Research questions and friction points this paper is trying to address.

Evaluating activation functions for fNIRS classification tasks
Assessing impact of symmetrical functions on deep learning performance
Improving model accuracy in low SNR fNIRS data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates activation functions for fNIRS classification
Uses symmetrical functions like Tanh and Abs(x)
Tests domain-specific deep learning architectures
Behtom Adeli
Department of Electrical, Computer, & Biomedical Engineering, University of Rhode Island, Kingston, RI, USA
John McLinden
Department of Electrical, Computer, & Biomedical Engineering, University of Rhode Island, Kingston, RI, USA
Pankaj Pandey
St. Jude Children’s Research Hospital, Memphis, TN, USA
Ming Shao
UMass Lowell (Associate Professor)
Machine Learning · Data Mining · Computer Vision
Yalda Shahriari
Department of Electrical, Computer, & Biomedical Engineering, University of Rhode Island, Kingston, RI, USA