🤖 AI Summary
Existing EEG foundation models treat neural signals as Euclidean time series, neglecting their intrinsic low-dimensional Riemannian manifold structure; this leads to suboptimal representation quality and limited cross-subject generalization. To address this, we propose the first geometry-aware foundation-model framework for EEG: a Riemannian variational autoencoder learns the latent manifold space, a Transformer with a geodesic-aware attention mechanism operates directly on that manifold, and neural ordinary differential equations (neural ODEs) model dynamical evolution along it. By jointly learning geometric structure and temporal dynamics within the manifold space, our method enables intrinsic modeling of non-Euclidean neural data. Evaluated on four public EEG datasets, it achieves 4.6–4.8% higher classification accuracy and 6.2–10.2% higher Cohen's Kappa than state-of-the-art methods. Moreover, it uncovers physiologically plausible, interpretable brain-activity patterns aligned with established neurophysiological principles.
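The summary names the neural-ODE component without specifying the manifold or solver. As a minimal, hedged sketch of manifold-constrained dynamics, the snippet below Euler-integrates a linear vector field while projecting each step onto a unit hypersphere; both the sphere (standing in for the learned latent manifold) and the linear field `W` are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def manifold_ode_rollout(z0, W, steps=50, dt=0.02):
    """Toy manifold-constrained dynamics: integrate dz/dt = tangent part of
    a linear vector field W @ z, retracting onto the unit sphere so the
    trajectory never leaves the (toy) manifold. W stands in for a learned
    neural-ODE vector field; the sphere stands in for the learned latent
    manifold -- both are illustrative assumptions."""
    z = z0 / np.linalg.norm(z0)          # start on the sphere
    traj = [z]
    for _ in range(steps):
        v = W @ z
        v_tan = v - (v @ z) * z          # project velocity onto the tangent space at z
        z = z + dt * v_tan               # Euler step in the tangent space
        z = z / np.linalg.norm(z)        # retract back onto the sphere
        traj.append(z)
    return np.stack(traj)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
traj = manifold_ode_rollout(rng.normal(size=4), W)
```

The key property the sketch demonstrates is the constraint itself: every state in `traj` has unit norm, i.e. the evolution stays on the manifold by construction rather than by penalty.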
📝 Abstract
Existing EEG foundation models mainly treat neural signals as generic time series in Euclidean space, ignoring the intrinsic geometric structure of neural dynamics that constrains brain activity to low-dimensional manifolds. This fundamental mismatch between model assumptions and neural geometry limits representation quality and cross-subject generalization. ManifoldFormer addresses this limitation through a novel geometric deep learning framework that explicitly learns neural manifold representations. The architecture integrates three key innovations: a Riemannian VAE for manifold embedding that preserves geometric structure, a geometric Transformer with geodesic-aware attention mechanisms operating directly on neural manifolds, and a dynamics predictor leveraging neural ODEs for manifold-constrained temporal evolution. Extensive evaluation across four public datasets demonstrates substantial improvements over state-of-the-art methods, with 4.6–4.8% higher accuracy and 6.2–10.2% higher Cohen's Kappa, while maintaining robust cross-subject generalization. The geometric approach reveals meaningful neural patterns consistent with neurophysiological principles, establishing geometric constraints as essential for effective EEG foundation models.
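The abstract does not fix the metric behind the geodesic-aware attention. As an illustrative sketch only, the snippet below scores token pairs by geodesic (arc-length) distance on a unit hypersphere instead of by Euclidean dot product; the sphere as the manifold and the temperature `tau` are assumptions introduced here, not details from the paper.

```python
import numpy as np

def geodesic_attention(tokens, tau=1.0):
    """Sketch of geodesic-aware attention: tokens are projected onto the
    unit hypersphere (a toy stand-in for the learned neural manifold), and
    attention decays with squared geodesic distance rather than growing
    with Euclidean dot products."""
    # Normalize rows so arccos of the pairwise cosine gives geodesic distance.
    x = tokens / np.linalg.norm(tokens, axis=-1, keepdims=True)
    cos = np.clip(x @ x.T, -1.0, 1.0)
    d_geo = np.arccos(cos)                           # pairwise geodesic distances
    scores = -d_geo**2 / tau                         # closer on the manifold -> higher score
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # row-wise softmax
    return w @ x                                     # geodesically weighted mixture

rng = np.random.default_rng(0)
out = geodesic_attention(rng.normal(size=(5, 8)))
```

The design choice this illustrates: because distance is measured along the manifold, two signals that are far apart in raw Euclidean coordinates but close on the latent manifold still attend strongly to each other, which is the property the abstract credits for cross-subject generalization.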