🤖 AI Summary
Functional near-infrared spectroscopy (fNIRS) signals suffer from low signal-to-noise ratio and high inter-subject variability, posing significant challenges for deep learning–based classification. Method: This study systematically investigates the impact of activation functions on fNIRS signal decoding performance across four representative architectures—fNIRSNet, AbsoluteNet, MDNN, and ShallowConvNet—under standardized preprocessing and training protocols. Contribution/Results: We identify “symmetry” as a critical activation function property uniquely suited to fNIRS data—a novel insight empirically validated for the first time. Tanh and Abs(x) consistently outperform ReLU, especially in shallow networks. Building on this, we propose the Modified Absolute Function (MAF), a symmetry-preserving activation. Experimental results demonstrate that symmetric activations yield average accuracy improvements of 2.3–4.1% across datasets and models. This work establishes both theoretical justification and practical guidelines for activation function selection in fNIRS-based brain–computer interfaces.
📝 Abstract
Activation functions are critical to the performance of deep neural networks, particularly in domains such as functional near-infrared spectroscopy (fNIRS), where nonlinearity, low signal-to-noise ratio (SNR), and signal variability pose significant challenges to model accuracy. However, the impact of activation functions on deep learning (DL) performance in the fNIRS domain remains underexplored and lacks systematic investigation in the current literature. This study evaluates a range of conventional and field-specific activation functions for fNIRS classification tasks using multiple deep learning architectures, including the domain-specific fNIRSNet, AbsoluteNet, MDNN, and ShallowConvNet (as the baseline), all tested on a single dataset recorded during an auditory task. To ensure a fair comparison, all networks were trained and tested using standardized preprocessing and consistent training parameters. The results show that symmetrical activation functions such as Tanh and the absolute value function Abs(x) can outperform commonly used functions like the Rectified Linear Unit (ReLU), depending on the architecture. Additionally, a focused analysis of the role of symmetry was conducted using a Modified Absolute Function (MAF), with results further supporting the performance benefits of symmetrical activation functions. These findings underscore the importance of selecting activation functions that align with the signal characteristics of fNIRS data.
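The "symmetry" property the abstract emphasizes can be stated concretely: Abs(x) is even-symmetric (f(-x) = f(x)), Tanh is odd-symmetric (f(-x) = -f(x)), while ReLU is neither and zeroes out one sign of the input. The exact definition of the proposed MAF is given in the paper itself and is not reproduced here; the following minimal numpy sketch only verifies the symmetry properties of the standard functions named above:

```python
import numpy as np

# ReLU for comparison: not symmetric in either sense, since it
# discards all negative-valued deflections of the input signal.
def relu(x):
    return np.maximum(x, 0.0)

x = np.linspace(-3.0, 3.0, 101)

even_abs = np.allclose(np.abs(-x), np.abs(x))     # Abs is even: f(-x) == f(x)
odd_tanh = np.allclose(np.tanh(-x), -np.tanh(x))  # Tanh is odd: f(-x) == -f(x)
even_relu = np.allclose(relu(-x), relu(x))        # ReLU fails the even test
odd_relu = np.allclose(relu(-x), -relu(x))        # ReLU fails the odd test

print(even_abs, odd_tanh, even_relu, odd_relu)
```

Because fNIRS responses contain physiologically meaningful deflections of both signs (e.g. oxy- and deoxyhemoglobin changes moving in opposite directions), a symmetric activation preserves information from both polarities that ReLU would discard, which is consistent with the paper's findings.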