Decoding Dynamic Visual Experience from Calcium Imaging via Cell-Pattern-Aware SSL

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural data exhibit high cellular heterogeneity and severe label scarcity, which limit the efficacy of self-supervised learning (SSL). To address this, we propose POYO-SSL, the first method to use neuronal activity predictability as a principled selection criterion: calcium imaging data are partitioned into predictable neurons for pretraining and unpredictable neurons for fine-tuning, turning population-level heterogeneity into a modeling advantage. The approach combines cell-type classification guided by higher-order statistics (specifically skewness and kurtosis) with a self-supervised pretraining and fine-tuning paradigm. Evaluated on the Allen Brain Observatory dataset, POYO-SSL achieves 12–13% accuracy gains over from-scratch training. Crucially, performance scales robustly with increasing model size, avoiding the saturation or degradation observed in existing methods. This establishes a novel, scalable paradigm for building foundation models for neural decoding.

📝 Abstract
Self-supervised learning (SSL) holds a great deal of promise for applications in neuroscience, due to the lack of large-scale, consistently labeled neural datasets. However, most neural datasets contain heterogeneous populations that mix stable, predictable cells with highly stochastic, stimulus-contingent ones, which has made it hard to identify consistent activity patterns during SSL. As a result, self-supervised pretraining has yet to show clear benefits from scale on neural data. Here, we present a novel approach to self-supervised pretraining, POYO-SSL, which exploits the heterogeneity of neural data to improve pretraining and achieve benefits of scale. Specifically, in POYO-SSL we pretrain only on predictable (statistically regular) neurons, identified on the pretraining split via simple higher-order statistics (skewness and kurtosis), and then fine-tune on the unpredictable population for downstream tasks. On the Allen Brain Observatory dataset, this strategy yields approximately 12–13% relative gains over from-scratch training and exhibits smooth, monotonic scaling with model size. In contrast, existing state-of-the-art baselines plateau or destabilize as model size increases. By making predictability an explicit metric for crafting the data diet, POYO-SSL turns heterogeneity from a liability into an asset, providing a robust, biologically grounded recipe for scalable neural decoding and a path toward foundation models of neural dynamics.
Problem

Research questions and friction points this paper is trying to address.

Identifies predictable neurons using statistical regularity for decoding
Addresses neural heterogeneity by separating stable and stochastic cells
Enables scalable self-supervised learning on calcium imaging data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selects predictable neurons using statistical metrics
Pretrains only on predictable neuron subsets
Fine-tunes unpredictable neurons for downstream tasks
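The selection step above can be sketched in a few lines: compute per-neuron skewness and excess kurtosis of the calcium traces, then split the population into a "predictable" pool (pretraining) and an "unpredictable" pool (fine-tuning). This is a minimal illustration of the idea, not the paper's implementation; the threshold values and the `split_by_predictability` helper are assumptions for the example.

```python
import numpy as np

def moment_stats(traces):
    """Per-neuron skewness and excess kurtosis.

    traces: (n_neurons, n_timepoints) array of dF/F values.
    """
    mu = traces.mean(axis=1, keepdims=True)
    sigma = traces.std(axis=1, keepdims=True)
    z = (traces - mu) / sigma
    skew = (z ** 3).mean(axis=1)
    kurt = (z ** 4).mean(axis=1) - 3.0  # excess kurtosis (0 for a Gaussian)
    return skew, kurt

def split_by_predictability(traces, skew_max=2.0, kurt_max=5.0):
    """Flag neurons with modest higher-order moments as 'predictable'.

    The thresholds are illustrative placeholders, not values from the paper.
    Returns a boolean mask: True -> pretraining pool, False -> fine-tuning pool.
    """
    skew, kurt = moment_stats(traces)
    return (np.abs(skew) < skew_max) & (kurt < kurt_max)

# Synthetic demo: near-Gaussian traces vs. heavy-tailed, bursty traces.
rng = np.random.default_rng(0)
regular = rng.normal(size=(5, 1000))            # low skew/kurtosis
bursty = rng.exponential(size=(5, 1000)) ** 2   # high skew/kurtosis
traces = np.vstack([regular, bursty])

mask = split_by_predictability(traces)
pretrain_pool = traces[mask]       # statistically regular neurons
finetune_pool = traces[~mask]      # stochastic, stimulus-contingent neurons
print(mask)
```

On this synthetic data, the near-Gaussian rows land in the pretraining pool and the heavy-tailed rows in the fine-tuning pool, mirroring the predictable/unpredictable split the paper describes.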