🤖 AI Summary
This work addresses the challenge of extracting time-invariant and modality-agnostic intrinsic representations from multi-segment neuronal activity data. We introduce the concept of "Platonic intrinsic representations," modeling each neuron as an autonomous system and explicitly decoupling neuron-specific idiosyncrasies from cross-sample invariant properties. Methodologically, we develop a contrastive learning framework built upon VICReg: temporally distinct activity segments from the same neuron form positive pairs, while segments from different neurons are kept apart via regularization rather than explicit negative pairs. The learned representations encode fundamental neuronal attributes, including molecular identity, spatial location, and morphological features. Evaluated on synthetic data and on real spatial-transcriptomic and electrophysiological datasets, our model significantly improves accuracy in predicting neuronal type and anatomical location. Crucially, it demonstrates strong out-of-distribution generalization to unseen individuals, achieving, for the first time, time-robust representation learning at the single-neuron level.
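To make the training objective concrete, below is a minimal NumPy sketch of a VICReg-style loss on paired segment embeddings. The function name, weights, and margin `gamma` are illustrative defaults, not the paper's actual configuration; `z_a` and `z_b` stand for embeddings of two activity segments from the same set of neurons.

```python
import numpy as np

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, gamma=1.0, eps=1e-4):
    """Sketch of a VICReg-style loss (illustrative weights, not the paper's).

    z_a, z_b: (batch, dim) embeddings of two segments per neuron (positive pairs).
    """
    n, d = z_a.shape
    # Invariance term: pull embeddings of the same neuron's segments together.
    sim = np.mean((z_a - z_b) ** 2)
    # Variance term: keep each dimension's std above gamma to prevent collapse.
    std_a = np.sqrt(z_a.var(axis=0) + eps)
    std_b = np.sqrt(z_b.var(axis=0) + eps)
    var = np.mean(np.maximum(0.0, gamma - std_a)) + np.mean(np.maximum(0.0, gamma - std_b))
    # Covariance term: penalize off-diagonal covariance to decorrelate dimensions,
    # which spreads different neurons apart without explicit negative pairs.
    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d
    cov = cov_term(z_a) + cov_term(z_b)
    return sim_w * sim + var_w * var + cov_w * cov
```

Note that only the invariance term compares pairs directly; the variance and covariance regularizers act on each batch alone, which is what lets this objective avoid mining explicit negatives.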
📝 Abstract
The Platonic Representation Hypothesis suggests that a universal, modality-independent representation of reality underlies different data modalities. Inspired by this, we view each neuron as a system and record multiple segments of its activity under varying peripheral conditions. We assume each neuron has a time-invariant representation reflecting its intrinsic properties, such as molecular profile, location, and morphology. Obtaining such intrinsic neuronal representations imposes two criteria: (I) segments from the same neuron should have more similar representations than segments from different neurons; (II) the representations must generalize well to out-of-domain data. To meet these criteria, we propose the NeurPIR (Neuron Platonic Intrinsic Representation) framework. It uses contrastive learning, treating segments from the same neuron as positive pairs and segments from different neurons as negative pairs. In implementation, we use VICReg, which aligns positive pairs and separates dissimilar samples via regularization. We tested our method on neuronal population dynamics simulated with the Izhikevich model; the learned representations accurately identified neuron types defined by the preset model hyperparameters. We also applied it to two real-world neuronal dynamics datasets, one with neuron-type annotations from spatial transcriptomics and one with neuron-location labels. Our model's learned representations accurately predicted neuron types and locations and remained robust on out-of-domain data from unseen animals. These results demonstrate the potential of our approach for understanding neuronal systems and for future neuroscience research.
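For readers unfamiliar with the synthetic benchmark, the Izhikevich model reduces a spiking neuron to two coupled variables (membrane potential `v` and recovery `u`) whose behavior is set by four hyperparameters `a, b, c, d`. A minimal Euler-integration sketch is below; the default values are the classic "regular spiking" preset from Izhikevich's work, and the input current, duration, and step size are illustrative, not the paper's simulation settings.

```python
import numpy as np

def simulate_izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, steps=2000, dt=0.5):
    """Euler simulation of a single Izhikevich neuron (illustrative settings).

    Dynamics: v' = 0.04*v^2 + 5*v + 140 - u + I,  u' = a*(b*v - u),
    with reset v <- c, u <- u + d whenever v reaches the 30 mV spike peak.
    Returns the clipped voltage trace and spike-step indices.
    """
    v, u = c, b * c
    trace, spikes = [], []
    for t in range(steps):
        if v >= 30.0:              # spike detected: record and reset
            spikes.append(t)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        trace.append(min(v, 30.0))  # clip at peak, the usual plotting convention
    return np.array(trace), spikes
```

In this setup, the type of each simulated neuron is fully determined by its `(a, b, c, d)` preset, which is what makes the simulation a clean ground-truth testbed: an intrinsic representation learned from activity segments alone should cluster by these hidden hyperparameters.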