Coupled Transformer Autoencoder for Disentangling Multi-Region Neural Latent Dynamics

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: In multi-regional neural recordings, shared and region-specific dynamics are entangled and exhibit nonstationarity, nonlinearity, and long-range temporal dependencies. Existing alignment methods neglect temporal structure, while dynamic latent variable models are constrained by single-region assumptions, linear readouts, or ambiguity between shared and private signals.

Method: We propose a Coupled Transformer Autoencoder (CTAE) framework, the first to apply Transformers to orthogonal latent-space decomposition of multi-regional neural data. It explicitly separates shared and private dynamic representations while jointly modeling long-range temporal dependencies and nonlinear neural dynamics; orthogonality constraints, nonlinear state evolution, and multi-view alignment enable disentangled modeling of high-dimensional nonstationary sequences.

Contribution/Results: Evaluated on two multi-regional electrophysiological datasets, our method significantly improves behavioral decoding performance and uncovers interpretable cross-regional coordination patterns alongside region-specific neural dynamics.
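The orthogonality constraint between shared and private latents can be illustrated with a minimal NumPy sketch: penalize the squared Frobenius norm of the cross-product between the two latent trajectories. This is a hedged illustration only, not the paper's implementation; the function name and toy latents are hypothetical.

```python
import numpy as np

def orthogonality_penalty(z_shared: np.ndarray, z_private: np.ndarray) -> float:
    """Squared Frobenius norm of the cross-product between shared and
    private latent trajectories (each of shape T timesteps x latent dims).
    Driving this toward zero encourages the two subspaces to carry
    non-overlapping information. Hypothetical sketch, not the paper's code."""
    cross = z_shared.T @ z_private          # (d_shared, d_private)
    return float(np.sum(cross ** 2))

# Toy example: sine and cosine trajectories are orthogonal over a full period,
# so the penalty is (numerically) zero.
t = np.linspace(0, 2 * np.pi, 100)
z_s = np.stack([np.sin(t)], axis=1)         # "shared" latent, shape (100, 1)
z_p = np.stack([np.cos(t)], axis=1)         # "private" latent, shape (100, 1)
print(round(orthogonality_penalty(z_s, z_p), 6))  # prints 0.0
```

In practice such a penalty would be added to the training loss so that gradient descent pushes the shared and private subspaces apart.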

📝 Abstract
Simultaneous recordings from thousands of neurons across multiple brain areas reveal rich mixtures of activity that are shared between regions and dynamics that are unique to each region. Existing alignment or multi-view methods neglect temporal structure, whereas dynamical latent variable models capture temporal dependencies but are usually restricted to a single area, assume linear read-outs, or conflate shared and private signals. We introduce the Coupled Transformer Autoencoder (CTAE), a sequence model that addresses both (i) non-stationary, non-linear dynamics and (ii) separation of shared versus region-specific structure in a single framework. CTAE employs transformer encoders and decoders to capture long-range neural dynamics and explicitly partitions each region's latent space into orthogonal shared and private subspaces. We demonstrate the effectiveness of CTAE on two high-density electrophysiology datasets with simultaneous recordings from multiple regions, one from motor cortical areas and the other from sensory areas. CTAE extracts meaningful representations that better decode behavioral variables compared to existing approaches.
Problem

Research questions and friction points this paper is trying to address.

Modeling non-linear neural dynamics across brain regions
Disentangling shared versus region-specific neural signals
Capturing long-range temporal dependencies in multi-area recordings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer autoencoder captures long-range neural dynamics
Orthogonal subspaces separate shared and private signals
Handles non-stationary nonlinear multi-region neural data
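How the pieces above might fit together can be sketched as a single training objective combining reconstruction, cross-region alignment of shared latents, and the shared/private orthogonality term. This is a hedged toy sketch with linear stand-ins for the transformer encoders/decoders; all variable names, dimensions, and loss weights are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_a, n_b, d_s, d_p = 200, 30, 25, 4, 3   # timesteps, neurons per region, latent dims

# Toy linear "encoders" and "decoders" per region (stand-ins for transformers).
enc_a = rng.standard_normal((n_a, d_s + d_p)) * 0.1
enc_b = rng.standard_normal((n_b, d_s + d_p)) * 0.1
dec_a = rng.standard_normal((d_s + d_p, n_a)) * 0.1
dec_b = rng.standard_normal((d_s + d_p, n_b)) * 0.1

x_a = rng.standard_normal((T, n_a))          # region A activity
x_b = rng.standard_normal((T, n_b))          # region B activity

def losses(x_a, x_b):
    z_a, z_b = x_a @ enc_a, x_b @ enc_b
    zs_a, zp_a = z_a[:, :d_s], z_a[:, d_s:]  # shared / private split, region A
    zs_b, zp_b = z_b[:, :d_s], z_b[:, d_s:]  # shared / private split, region B
    recon = np.mean((z_a @ dec_a - x_a) ** 2) + np.mean((z_b @ dec_b - x_b) ** 2)
    align = np.mean((zs_a - zs_b) ** 2)      # pull the regions' shared latents together
    ortho = np.sum((zs_a.T @ zp_a) ** 2) + np.sum((zs_b.T @ zp_b) ** 2)
    return recon, align, ortho

recon, align, ortho = losses(x_a, x_b)
total = recon + 0.1 * align + 0.01 * ortho   # hypothetical loss weights
print(recon, align, ortho)
```

In the actual model the linear maps would be transformer encoder/decoder stacks trained by gradient descent on this combined objective; the sketch only shows how the three loss terms interact.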