LuMamba: Latent Unified Mamba for Electrode Topology-Invariant and Efficient EEG Modeling

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalizability and scalability of EEG modeling caused by varying electrode topologies and the quadratic computational complexity of Transformers. To overcome these issues, the authors propose LuMamba, a framework that integrates topology-invariant encoding with a linear-complexity state-space model: a LUNA-style learned-query cross-attention mechanism unifies channel representations across montages, and FEMBA bidirectional Mamba blocks provide efficient long-range temporal modeling. Notably, this study presents the first systematic application of LeJEPA (Latent-Euclidean Joint-Embedding Predictive Architecture) to self-supervised learning of biosignals, and finds that combining it with a masked reconstruction objective yields the most robust representations. With only 4.6 million parameters, LuMamba achieves a balanced accuracy of 80.99% on TUAB and an AUPR of 0.97 for Alzheimer's detection, while requiring 377× fewer FLOPs than state-of-the-art methods and processing sequences 12× longer.
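The topology-invariance idea can be sketched as cross-attention from a fixed set of learned query tokens onto a variable number of per-channel embeddings, so the output shape no longer depends on the montage. A minimal NumPy sketch, with all names and dimensions illustrative rather than taken from the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unify_channels(channel_emb, queries, Wk, Wv):
    """Cross-attend K learned queries onto C channel embeddings.

    channel_emb: (C, d) per-channel features (C varies per montage)
    queries:     (K, d) learned latent tokens (K is fixed)
    Returns:     (K, d) unified representation, independent of C.
    """
    keys = channel_emb @ Wk                                # (C, d)
    values = channel_emb @ Wv                              # (C, d)
    scores = queries @ keys.T / np.sqrt(queries.shape[1])  # (K, C)
    attn = softmax(scores, axis=-1)  # each query attends over all channels
    return attn @ values             # (K, d)

rng = np.random.default_rng(0)
d, K = 32, 8
queries = rng.standard_normal((K, d))
Wk, Wv = rng.standard_normal((d, d)), rng.standard_normal((d, d))

# The same module handles 16- and 26-channel montages: output shape is fixed.
out16 = unify_channels(rng.standard_normal((16, d)), queries, Wk, Wv)
out26 = unify_channels(rng.standard_normal((26, d)), queries, Wk, Wv)
print(out16.shape, out26.shape)
```

Because the channel axis is summed out by the attention weights, downstream Mamba blocks always see the same number of latent tokens regardless of the electrode configuration.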

📝 Abstract
Electroencephalography (EEG) enables non-invasive monitoring of brain activity across clinical and neurotechnology applications, yet building foundation models for EEG remains challenging due to differing electrode topologies and computational scalability, as Transformer architectures incur quadratic sequence complexity. As a joint solution, we propose LuMamba (Latent Unified Mamba), a self-supervised framework combining topology-invariant encodings with linear-complexity state-space modeling, using LUNA's learned-query cross-attention mechanism for channel unification and FEMBA's bidirectional Mamba blocks for efficient temporal modeling. Within this architecture, we provide the first systematic investigation of the Latent-Euclidean Joint-Embedding Predictive Architecture (LeJEPA) for biosignal learning. Pre-trained on over 21,000 hours of unlabeled EEG from the TUEG corpus, LuMamba is evaluated on five downstream tasks spanning abnormality detection, artifact recognition, and mental condition classification across electrode configurations ranging from 16 to 26 channels. In the pre-training objective, masked reconstruction alone yields structured but less generalizable representations, while LeJEPA alone produces diffuse embeddings; combining both objectives achieves the most robust performance. With only 4.6M parameters, LuMamba attains 80.99% balanced accuracy on TUAB and achieves state-of-the-art performance on Alzheimer's detection (0.97 AUPR), while requiring 377× fewer FLOPs than state-of-the-art models at equivalent sequence lengths and scaling to 12× longer sequences before reaching typical GPU memory limits. Code is available at https://github.com/pulp-bio/biofoundation
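The efficiency gap follows from the asymptotics: self-attention costs on the order of L²·d in sequence length L, while a Mamba-style selective scan costs on the order of L·d·N for a fixed state size N, so the FLOP ratio grows linearly with L. A back-of-envelope sketch, where the leading constants and N=16 are assumptions for illustration, not the paper's exact operation counts:

```python
def attention_flops(L, d):
    # Dominant terms of one self-attention layer: Q @ K^T and attn @ V,
    # each roughly L * L * d multiply-adds.
    return 2 * L * L * d

def ssm_flops(L, d, N=16):
    # Linear-time scan: a size-N state update per step and per feature.
    return 2 * L * d * N

d = 64
for L in (1_000, 10_000, 100_000):
    ratio = attention_flops(L, d) / ssm_flops(L, d)
    print(f"L={L:>7}: attention/SSM FLOP ratio ~ {ratio:.0f}x")
```

Under these assumptions the ratio is simply L/N, which is why the advantage compounds at the long EEG sequence lengths the paper targets.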
Problem

Research questions and friction points this paper is trying to address.

electrode topology
computational scalability
EEG modeling
foundation models
sequence complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mamba
topology-invariant
state-space model
LeJEPA
self-supervised EEG