ManifoldFormer: Geometric Deep Learning for Neural Dynamics on Riemannian Manifolds

📅 2025-11-20
🤖 AI Summary
Existing EEG foundation models treat neural signals as Euclidean time series, neglecting their intrinsic low-dimensional Riemannian manifold structure, which leads to suboptimal representation quality and limited cross-subject generalization. To address this, we propose the first geometry-aware foundation model framework for EEG: it integrates a Riemannian variational autoencoder that learns the latent manifold space, a geodesic-aware Transformer attention mechanism, and neural ordinary differential equations (neural ODEs) that model dynamical evolution on the manifold. By jointly learning geometric structure and temporal dynamics within the manifold space, the method models non-Euclidean neural data intrinsically. Evaluated on four public EEG datasets, it achieves 4.6–4.8% higher classification accuracy and 6.2–10.2% higher Cohen's Kappa than state-of-the-art methods, and it uncovers physiologically plausible, interpretable brain activity patterns aligned with established neurophysiological principles.
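The paper does not spell out its attention formula here, but the core idea of "geodesic-aware attention" can be sketched under a simple assumption: token embeddings live on a unit hypersphere, and similarity is the negative geodesic (arc-length) distance rather than a raw dot product. All names below (`geodesic_attention`, the temperature `tau`) are illustrative, not from the paper.

```python
import numpy as np

def sphere_geodesic_dist(U, V):
    """Pairwise geodesic (arc-length) distance between rows of U and V on the unit hypersphere."""
    cos = np.clip(U @ V.T, -1.0, 1.0)   # pairwise cosines, clipped for numerical safety
    return np.arccos(cos)               # arc length on the unit sphere

def geodesic_attention(Q, K, V, tau=0.5):
    """Attention whose scores are negative geodesic distances instead of dot products."""
    Q = Q / np.linalg.norm(Q, axis=-1, keepdims=True)   # project queries onto the sphere
    K = K / np.linalg.norm(K, axis=-1, keepdims=True)   # project keys onto the sphere
    scores = -sphere_geodesic_dist(Q, K) / tau          # closer on the manifold -> higher score
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                  # row-wise softmax
    return w @ V

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
out = geodesic_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

The temperature `tau` plays the role of the usual `1/sqrt(d)` scaling; smaller values make attention concentrate on the geodesically nearest keys.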

📝 Abstract
Existing EEG foundation models mainly treat neural signals as generic time series in Euclidean space, ignoring the intrinsic geometric structure of neural dynamics that constrains brain activity to low-dimensional manifolds. This fundamental mismatch between model assumptions and neural geometry limits representation quality and cross-subject generalization. ManifoldFormer addresses this limitation through a novel geometric deep learning framework that explicitly learns neural manifold representations. The architecture integrates three key innovations: a Riemannian VAE for manifold embedding that preserves geometric structure, a geometric Transformer with geodesic-aware attention mechanisms operating directly on neural manifolds, and a dynamics predictor leveraging neural ODEs for manifold-constrained temporal evolution. Extensive evaluation across four public datasets demonstrates substantial improvements over state-of-the-art methods, with 4.6-4.8% higher accuracy and 6.2-10.2% higher Cohen's Kappa, while maintaining robust cross-subject generalization. The geometric approach reveals meaningful neural patterns consistent with neurophysiological principles, establishing geometric constraints as essential for effective EEG foundation models.
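The abstract's "Riemannian VAE for manifold embedding" can be illustrated, under assumptions, with the standard exponential-map construction on a unit sphere: sample noise in the tangent space at the predicted mean and map it onto the manifold, so reparameterized samples stay on the sphere by construction. The function names here are illustrative sketches, not the paper's code.

```python
import numpy as np

def sphere_exp_map(mu, v):
    """Exponential map on the unit sphere: move from base point mu along tangent vector v."""
    norm = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), 1e-12)
    return np.cos(norm) * mu + np.sin(norm) * (v / norm)

def riemannian_reparameterize(mu, log_var, rng):
    """Reparameterization trick on the manifold: Gaussian noise in the tangent space at mu."""
    eps = rng.normal(size=mu.shape) * np.exp(0.5 * log_var)
    eps = eps - (eps * mu).sum(-1, keepdims=True) * mu  # project noise into the tangent space
    return sphere_exp_map(mu, eps)                      # wrap the sample onto the sphere

rng = np.random.default_rng(1)
mu = np.array([[0.0, 0.0, 1.0]])                        # base point on S^2
z = riemannian_reparameterize(mu, np.log(0.01) * np.ones((1, 3)), rng)
print(np.linalg.norm(z))  # 1.0: the latent sample never leaves the manifold
```

Because `mu` is unit-length and the noise is tangent (orthogonal to `mu`), the exponential map returns an exactly unit-length vector, which is what "preserves geometric structure" amounts to in this sketch.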
Problem

Research questions and friction points this paper is trying to address.

Modeling neural dynamics on intrinsic Riemannian manifolds from EEG signals
Overcoming Euclidean space limitations for cross-subject generalization
Developing geometric deep learning with manifold-preserving architecture components
Innovation

Methods, ideas, or system contributions that make the work stand out.

Riemannian VAE for manifold embedding
Geometric Transformer with geodesic attention
Neural ODEs for manifold dynamics prediction
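The third component, "neural ODEs for manifold dynamics prediction," can be sketched under simple assumptions: a learned vector field is projected into the tangent space at the current state, integrated with Euler steps, and retracted back onto the sphere after each step so the trajectory never leaves the manifold. The `tanh` field and projection retraction are stand-ins, not the paper's actual solver or network.

```python
import numpy as np

def tangent_field(x, W):
    """Toy learned vector field, projected into the tangent space of the sphere at x."""
    v = np.tanh(x @ W)                                 # stand-in for a small neural network
    return v - (v * x).sum(-1, keepdims=True) * x      # remove the radial component

def integrate_on_sphere(x0, W, dt=0.01, steps=100):
    """Euler integration of dx/dt = f(x) with a projection retraction back onto the sphere."""
    x = x0 / np.linalg.norm(x0, axis=-1, keepdims=True)
    for _ in range(steps):
        x = x + dt * tangent_field(x, W)               # Euler step in the ambient space
        x = x / np.linalg.norm(x, axis=-1, keepdims=True)  # retract onto the manifold
    return x

rng = np.random.default_rng(2)
W = rng.normal(size=(3, 3)) * 0.5
x_T = integrate_on_sphere(rng.normal(size=(1, 3)), W)
print(np.linalg.norm(x_T))  # stays 1.0: evolution is manifold-constrained
```

A production version would replace the Euler loop with an adaptive ODE solver (e.g. via the adjoint method), but the projection step is what makes the temporal evolution "manifold-constrained" in the abstract's sense.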