Nonlinear classification of neural manifolds with contextual information

📅 2024-05-10
🏛️ Physical Review E
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
How do neural systems perform nonlinear classification of neural manifolds using contextual information? Existing linear readout methods fail to capture context-dependent manifold reconfiguration mechanisms. Method: We derive, for the first time, an exact analytical formula for context-dependent manifold capacity, explicitly coupling manifold geometry with contextual relevance; integrating differential geometry, random matrix theory, and information geometry, we conduct cross-scale validation via synthetic data simulations and electrophysiological recordings from primate V1 and IT cortex. Contribution: We overcome the limitations of linear readouts by establishing an interpretable, nonlinear, context-sensitive theoretical framework for neural computation. For the first time, we quantitatively identify and model context-driven representational reformatting—previously undetectable by conventional methods—in the early layers of deep neural networks. This significantly advances our understanding of distributed neural coding mechanisms, revealing how contextual signals dynamically reshape population-level neural representations.

Technology Category

Application Category

📝 Abstract
Understanding how neural systems efficiently process information through distributed representations is a fundamental challenge at the interface of neuroscience and machine learning. Recent approaches analyze the statistical and geometrical attributes of neural representations as population-level mechanistic descriptors of task implementation. In particular, manifold capacity has emerged as a promising framework linking population geometry to the separability of neural manifolds. However, this metric has been limited to linear readouts. To address this limitation, we introduce a theoretical framework that leverages latent directions in input space, which can be related to contextual information. We derive an exact formula for the context-dependent manifold capacity that depends on manifold geometry and context correlations, and validate it on synthetic and real data. Our framework's increased expressivity captures representation reformatting in deep networks at early stages of the layer hierarchy, previously inaccessible to analysis. As context-dependent nonlinearity is ubiquitous in neural systems, our data-driven and theoretically grounded approach promises to elucidate context-dependent computation across scales, datasets, and models.
Problem

Research questions and friction points this paper is trying to address.

Extends manifold capacity to nonlinear readouts using contextual information
Links neural manifold geometry and context correlations to separability
Enables analysis of context-dependent computation in neural systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages latent directions for nonlinear classification
Derives exact formula for context-dependent manifold capacity
Validates framework on synthetic and real data
🔎 Similar Papers
No similar papers found.