🤖 AI Summary
This work addresses the challenge of poor decoder generalization in implantable brain–computer interfaces caused by nonstationary neural activity across recording sessions, particularly when only limited data are available from the target session for adaptation. To tackle this issue, the authors propose a Task-Conditioned Latent Alignment (TCLA) framework. TCLA uses an autoencoder to learn low-dimensional representations of neural dynamics from a source session and introduces a task-conditioning mechanism that aligns the latent spaces of the source and target sessions, enabling efficient knowledge transfer. Evaluated on macaque motor and eye-movement datasets, TCLA substantially outperforms baseline methods that rely solely on target-session data, achieving up to a 0.386 improvement in the coefficient of determination (R²) for y-axis velocity decoding and markedly improving the robustness of cross-session decoding.
📝 Abstract
Cross-session nonstationarity in neural activity recorded by implanted electrodes is a major challenge for invasive brain–computer interfaces (BCIs), as decoders trained on data from one session often fail to generalize to subsequent sessions. This issue is exacerbated in practice, as retraining or adapting decoders becomes particularly difficult when only limited data are available from a new session. To address this challenge, we propose a Task-Conditioned Latent Alignment (TCLA) framework for cross-session neural decoding. Building upon an autoencoder architecture, TCLA first learns a low-dimensional representation of neural dynamics from a source session with sufficient data. For target sessions with limited data, TCLA then aligns target latent representations to the source in a task-conditioned manner, enabling effective transfer of the learned neural dynamics. We evaluate TCLA on macaque motor and oculomotor center-out datasets. Compared to baseline methods trained solely on target-session data, TCLA consistently improves decoding performance across datasets and decoding settings, with gains in the coefficient of determination of up to 0.386 for y-axis velocity decoding in a motor dataset. These results suggest that TCLA provides an effective strategy for transferring knowledge from source to target sessions, enabling more robust neural decoding under limited-data conditions.
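To make the core idea concrete, here is a minimal numpy sketch of task-conditioned latent alignment. It is an illustration under assumed simplifications, not the authors' implementation: latents are synthetic 2-D states, cross-session drift is modeled as an unknown affine transform, and "task conditioning" is reduced to fitting one affine map from the per-condition latent means of a few target trials (the paper's autoencoder and training details are not reproduced).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 2-D latent states for 4 center-out task conditions.
n_conditions, n_per_cond, dim = 4, 20, 2
cond_means = np.array([[2, 0], [0, 2], [-2, 0], [0, -2]], float)

# Source-session latents: clusters around condition-specific means.
src = np.concatenate([m + 0.1 * rng.standard_normal((n_per_cond, dim))
                      for m in cond_means])
labels = np.repeat(np.arange(n_conditions), n_per_cond)

# Target-session latents: the same dynamics viewed through a session-specific
# rotation and offset (a stand-in for cross-session nonstationarity).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
tgt = src @ R.T + np.array([0.5, -0.3])

# Task-conditioned alignment: fit an affine map target -> source using only
# the per-condition means of k target trials per condition (limited data).
k = 3
src_means = np.stack([src[labels == c][:k].mean(0) for c in range(n_conditions)])
tgt_means = np.stack([tgt[labels == c][:k].mean(0) for c in range(n_conditions)])
A = np.hstack([tgt_means, np.ones((n_conditions, 1))])   # affine design matrix
W, *_ = np.linalg.lstsq(A, src_means, rcond=None)

aligned = np.hstack([tgt, np.ones((len(tgt), 1))]) @ W

err_before = np.mean(np.linalg.norm(tgt - src, axis=1))
err_after = np.mean(np.linalg.norm(aligned - src, axis=1))
print(f"mean latent mismatch before: {err_before:.3f}, after: {err_after:.3f}")
```

Because the alignment is fit from condition-averaged latents rather than raw trials, it needs only a handful of labeled target trials per condition, which is the limited-data regime TCLA targets.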