Deep Optimal Transport for Domain Adaptation on SPD Manifolds

📅 2022-01-15
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses domain adaptation for cross-session covariance matrices—lying on the symmetric positive-definite (SPD) Riemannian manifold—in brain–computer interface (BCI) applications. Conventional approaches neglect the intrinsic Riemannian geometry of SPD manifolds, leading to geometrically inconsistent distribution alignment. To overcome this, we propose the first deep domain adaptation method that tightly integrates optimal transport theory with SPD manifold geometry. Specifically, we formulate both marginal and conditional distribution discrepancies using the Wasserstein distance and perform optimization via Riemannian gradient descent directly on the SPD manifold, thereby rigorously preserving the symmetric positive-definiteness of covariance matrices. Evaluated on three benchmark BCI datasets—KU, BNCI2014001, and BNCI2015001—our method achieves average classification accuracy improvements of 3.2–5.8% over state-of-the-art baselines. Embedding visualizations further confirm its geometric fidelity, demonstrating faithful preservation of manifold structure during adaptation.
📝 Abstract
Recent progress in geometric deep learning has drawn increasing attention from the machine learning community toward domain adaptation on symmetric positive definite (SPD) manifolds, especially for neuroimaging data that often suffer from distribution shifts across sessions. These data, typically represented as covariance matrices of brain signals, inherently lie on SPD manifolds due to their symmetry and positive definiteness. However, conventional domain adaptation methods often overlook this geometric structure when applied directly to covariance matrices, which can result in suboptimal performance. To address this issue, we introduce a new geometric deep learning framework that combines optimal transport theory with the geometry of SPD manifolds. Our approach aligns data distributions while respecting the manifold structure, effectively reducing both marginal and conditional discrepancies. We validate our method on three cross-session brain computer interface datasets, KU, BNCI2014001, and BNCI2015001, where it consistently outperforms baseline approaches while maintaining the intrinsic geometry of the data. We also provide quantitative results and visualizations to better illustrate the behavior of the learned embeddings.
Problem

Research questions and friction points this paper is trying to address.

Addressing distribution shifts in neuroimaging data on SPD manifolds
Aligning data distributions while preserving manifold geometry
Improving domain adaptation for cross-session brain-computer interfaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines optimal transport with SPD manifolds geometry
Aligns data distributions respecting manifold structure
Reduces marginal and conditional discrepancies effectively
🔎 Similar Papers