Self-Supervised Foundation Model for Calcium-imaging Population Dynamics

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing calcium imaging analysis methods are typically task-specific and lack transferability across diverse applications. This work proposes CalM, a self-supervised foundation model tailored for calcium signals, which constructs a shared discrete vocabulary via a high-fidelity quantizer and employs a dual-axis autoregressive Transformer to jointly model dependencies across both neuronal and temporal dimensions. CalM introduces the first self-supervised pretraining paradigm specifically designed for calcium imaging data, achieving substantial performance gains over task-specific baselines on large-scale, multi-animal datasets. The model supports a range of downstream tasks—including prediction and neural decoding—and enables interpretable insights into functional neural structures through linear representation analysis.
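The "high-fidelity quantizer" that builds the shared discrete vocabulary is not specified in this summary. A minimal vector-quantization sketch of the general idea — mapping fixed-length single-neuron trace patches to token ids by nearest-neighbor lookup in a codebook — might look like the following (all names and shapes are assumptions, not the paper's actual design):

```python
import numpy as np

def quantize_traces(traces, codebook):
    """Map fixed-length calcium-trace patches to discrete token ids by
    nearest-neighbor lookup in a codebook (VQ-style; illustrative only).

    traces:   (n_patches, patch_len) array of dF/F patches
    codebook: (vocab_size, patch_len) array of code vectors
    returns:  (n_patches,) integer token ids
    """
    # squared Euclidean distance from every patch to every code vector
    d2 = ((traces[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# toy example: 3 patches drawn near rows 2, 0, 3 of a 4-entry vocabulary
rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 5))
patches = codebook[[2, 0, 3]] + 0.01 * rng.normal(size=(3, 5))
tokens = quantize_traces(patches, codebook)
print(tokens)  # recovers the generating codes: [2 0 3]
```

In a trained tokenizer the codebook would be learned jointly with an encoder/decoder so that patches reconstruct with high fidelity; the lookup step itself stays this simple.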
📝 Abstract
Recent work suggests that large-scale, multi-animal modeling can significantly improve neural recording analysis. However, for functional calcium traces, existing approaches remain task-specific, limiting transfer across common neuroscience objectives. To address this challenge, we propose **CalM**, a self-supervised neural foundation model trained solely on neuronal calcium traces and adaptable to multiple downstream tasks, including forecasting and decoding. Our key contribution is a pretraining framework composed of a high-performance tokenizer mapping single-neuron traces into a shared discrete vocabulary, and a dual-axis autoregressive transformer modeling dependencies along both the neural and temporal axes. We evaluate CalM on a large-scale, multi-animal, multi-session dataset. On the neural population dynamics forecasting task, CalM outperforms strong specialized baselines after pretraining. With a task-specific head, CalM further adapts to the behavior decoding task and achieves superior results compared with supervised decoding models. Moreover, linear analyses of CalM representations reveal interpretable functional structures beyond predictive accuracy. Taken together, we propose a novel and effective self-supervised pretraining paradigm for foundation models based on calcium traces, paving the way for scalable pretraining and broad applications in functional neural analysis. Code will be released soon.
Problem

Research questions and friction points this paper is trying to address.

calcium imaging
neural population dynamics
foundation model
self-supervised learning
cross-task transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-supervised learning
foundation model
calcium imaging
dual-axis transformer
neural population dynamics
Xinhong Xu
Department of Automation, Tsinghua University, Beijing, China
Yimeng Zhang
Beijing University of Posts and Telecommunications
Semantic communications
Qichen Qian
School of Life Sciences, Tsinghua University, Beijing, China
Yuanlong Zhang
Tsinghua University
Biophotonics