Identifying Connectivity Distributions from Neural Dynamics Using Flows

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Inferring connectivity structure from neural population activity is inherently non-unique, as multiple distinct connectivity patterns can generate identical dynamics. To address this ambiguity, this work proposes a novel approach grounded in the principle of maximum entropy and continuous normalizing flows (CNFs), which learns a distribution over connection weights consistent with observed dynamics rather than seeking a single deterministic solution. By reframing circuit inference as the identification of the distribution of connections necessary for computation, the method circumvents spurious structures arising from underconstrained inverse problems and captures complex empirical features such as heavy-tailed weight distributions. Experiments on synthetic data—including multistable systems, limit cycles, and ring attractors—as well as recordings from rat frontal cortex during decision-making tasks demonstrate that the approach accurately recovers the essential connectivity structures compatible with the underlying neural dynamics.
📝 Abstract
Connectivity structure shapes neural computation, but inferring this structure from population recordings is degenerate: multiple connectivity structures can generate identical dynamics. Recent work uses low-rank recurrent neural networks (lrRNNs) to infer low-dimensional latent dynamics and connectivity structure from observed activity, enabling a mechanistic interpretation of the dynamics. However, standard approaches for training lrRNNs can recover spurious structures irrelevant to the underlying dynamics. We first characterize the identifiability of connectivity structures in lrRNNs and determine conditions under which a unique solution exists. Then, to find such solutions, we develop an inference framework based on maximum entropy and continuous normalizing flows (CNFs), trained via flow matching. Instead of estimating a single connectivity matrix, our method learns the maximally unbiased distribution over connection weights consistent with observed dynamics. This approach captures complex yet necessary distributions such as heavy-tailed connectivity found in empirical data. We validate our method on synthetic datasets with connectivity structures that generate multistable attractors, limit cycles, and ring attractors, and demonstrate its applicability in recordings from rat frontal cortex during decision-making. Our framework shifts circuit inference from recovering connectivity to identifying which connectivity structures are computationally required, and which are artifacts of underconstrained inference.
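The abstract states that the CNF over connection weights is trained via flow matching, but gives no implementation details. As a rough illustration of the flow-matching idea only, the toy sketch below regresses a velocity field onto straight-line interpolants between Gaussian noise and samples of heavy-tailed "weights", then integrates that field to draw new samples. The Student-t stand-in data, the linear velocity model, and all variable names are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "data": heavy-tailed connection weights (Student-t), standing in for
# the empirical heavy-tailed weight distribution the paper's CNF targets.
w_data = rng.standard_t(df=3, size=(4096, 1))

# Flow matching: sample t ~ U(0,1), a base point x0 ~ N(0,1), a data point x1,
# form the linear interpolant x_t, and regress a velocity model onto x1 - x0.
t = rng.uniform(size=(4096, 1))
x0 = rng.standard_normal((4096, 1))
x1 = w_data
x_t = (1.0 - t) * x0 + t * x1        # interpolant between noise and data
v_target = x1 - x0                   # conditional target velocity

# Hypothetical linear velocity model v(x, t) = a*x + b*t + c, fit in closed
# form by least squares (a real CNF would use a neural network and SGD).
features = np.hstack([x_t, t, np.ones_like(t)])
theta, *_ = np.linalg.lstsq(features, v_target, rcond=None)

# Sampling: push base noise through dx/dt = v(x, t) with 100 Euler steps.
x = rng.standard_normal((2048, 1))
for step in range(100):
    s = np.full((2048, 1), step / 100)
    x = x + 0.01 * (np.hstack([x, s, np.ones_like(s)]) @ theta)
```

A linear velocity field cannot reproduce the heavy tails themselves; it only illustrates the training and sampling loop that a neural velocity model would follow.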
Problem

Research questions and friction points this paper is trying to address.

connectivity inference
neural dynamics
identifiability
degeneracy
latent dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

continuous normalizing flows
maximum entropy
low-rank RNNs
connectivity inference
neural dynamics
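The low-rank RNNs listed above constrain connectivity to an outer product, J = m nᵀ / N, so that the network state collapses onto a low-dimensional latent variable; this is the model class whose identifiability the paper analyzes. A minimal rank-one simulation of the standard dynamics dx/dt = -x + J tanh(x) is sketched below. The choice n = 2m (overlap above 1, giving bistability) and all names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
m = rng.standard_normal(N)        # output direction of the rank-one structure
n = 2.0 * m                       # input-selection vector; overlap > 1 => bistability
J = np.outer(m, n) / N            # rank-one connectivity matrix

x = 0.1 * rng.standard_normal(N)  # small random initial state
dt = 0.1
for _ in range(1000):             # Euler integration of dx/dt = -x + J tanh(x)
    x = x + dt * (-x + J @ np.tanh(x))

# The state settles onto the line spanned by m, so the whole network is
# summarized by one latent variable kappa = (1/N) n . tanh(x).
kappa = float(n @ np.tanh(x)) / N
```

At a fixed point, x = J tanh(x) = kappa * m exactly, which is why activity aligned with m and a nonzero kappa indicate that the rank-one structure, not the full N x N matrix, determines the computation.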
Timothy Doyeon Kim
Allen Institute, Seattle, WA, USA; University of Washington, Seattle, WA, USA
Ulises Pereira-Obilinovic
Allen Institute, Seattle, WA, USA
Yiliu Wang
Allen Institute, Seattle, WA, USA; University of Washington, Seattle, WA, USA
Eric Shea-Brown
University of Washington, Seattle, WA, USA
Uygar Sümbül
Allen Institute for Brain Science