🤖 AI Summary
Large inter-subject variability and scarce labeled data in brain–computer interfaces (BCIs) lead to time-consuming calibration. Method: We propose an end-to-end adaptive decoding framework featuring a lightweight convolutional neural network explicitly conditioned on subject identity to model individual dependencies; it integrates class-imbalanced optimization and automated hyperparameter tuning to enhance cross-subject generalization under low-calibration settings. The framework further supports interpretable representation visualization for improved model transparency. Results: Evaluated on time-modulated event-related potential classification, our method consistently outperforms state-of-the-art transfer learning and adaptive BCI approaches across multiple architectures. It achieves high decoding accuracy even with minimal calibration data—e.g., fewer than 20 trials per subject—demonstrating robustness in low-data regimes. This work establishes a novel paradigm for practical, scalable, and adaptive BCI systems.
📝 Abstract
Brain-Computer Interfaces (BCIs) suffer from high inter-subject variability and limited labeled data, often requiring lengthy calibration phases. In this work, we present an end-to-end approach that explicitly models the subject dependency using lightweight convolutional neural networks (CNNs) conditioned on the subject's identity. Our method integrates hyperparameter optimization strategies that prioritize class imbalance and evaluates two conditioning mechanisms to adapt pre-trained models to unseen subjects with minimal calibration data. We benchmark three lightweight architectures on a time-modulated Event-Related Potentials (ERP) classification task, providing interpretable evaluation metrics and explainable visualizations of the learned representations. Results demonstrate improved generalization and data-efficient calibration, highlighting the scalability and practicality of subject-adaptive BCIs.