When predict can also explain: few-shot prediction to select better neural latents

📅 2024-05-23
🏛️ arXiv.org
📈 Citations: 0
Influential citations: 0
📄 PDF
🤖 AI Summary
Neural latent variable modeling suffers from the absence of ground-truth labels, making it difficult to quantify how faithfully inferred latents reflect the true underlying neural dynamics. Method: We propose few-shot co-smoothing, a secondary predictive benchmark complementing conventional co-smoothing, to steer model selection toward more parsimonious and neurophysiologically plausible latent dynamics. In an HMM-based teacher–student framework, we show that models with near-optimal co-smoothing can harbor arbitrary extraneous latent dynamics; few-shot co-smoothing, which regresses from the latents to held-out channels using only a few trials, penalizes exactly these models. We further introduce a cross-model latent-decoding consistency check and evaluate two state-of-the-art methods, LFADS and STNDT, on real neural data. Results: Models with higher few-shot co-smoothing scores carry less latent redundancy and show stronger cross-model decoding consistency, yielding inferred latent dynamics that more faithfully track the unobserved ground truth. This establishes a generalizable, ground-truth-free evaluation paradigm for neural latent variable modeling.
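
To make the metric concrete, here is a minimal sketch of a few-shot co-smoothing readout, assuming Poisson spike counts, a single held-out channel, and the array shapes noted in the docstring (the function name, shapes, and ridge penalty are illustrative assumptions, not the paper's code):

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

def few_shot_co_smoothing(latents, held_out, k, seed=0):
    """Sketch of few-shot co-smoothing for one held-out channel.

    latents:  (n_trials, n_bins, n_latents) latent trajectories inferred
              by the model under evaluation (from held-in channels only).
    held_out: (n_trials, n_bins) spike counts of the held-out channel.
    k:        number of "shots", i.e. trials used to fit the readout.
    Returns the mean Poisson log-likelihood (up to the log y! constant)
    of the held-out spikes on the remaining trials.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(latents))
    fit, test = order[:k], order[k:]

    # Flatten (trial, bin) pairs into regression samples.
    d = latents.shape[-1]
    X_fit, y_fit = latents[fit].reshape(-1, d), held_out[fit].reshape(-1)
    X_test, y_test = latents[test].reshape(-1, d), held_out[test].reshape(-1)

    # Poisson regression readout, matching the spiking observation model.
    rate = PoissonRegressor(alpha=1e-3).fit(X_fit, y_fit).predict(X_test)
    rate = np.clip(rate, 1e-8, None)  # guard the log
    return np.mean(y_test * np.log(rate) - rate)
```

The intuition: with small k, a readout fit on latents padded with extraneous dynamics has more weights to estimate from the same few trials, so it generalizes worse; that is why the metric separates minimal models from redundant ones.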

📝 Abstract
Latent variable models serve as powerful tools to infer underlying dynamics from observed neural activity. Ideally, the inferred dynamics should align with true ones. However, due to the absence of ground truth data, prediction benchmarks are often employed as proxies. One widely-used method, *co-smoothing*, involves jointly estimating latent variables and predicting observations along held-out channels to assess model performance. In this study, we reveal the limitations of the co-smoothing prediction framework and propose a remedy. In a student-teacher setup with Hidden Markov Models, we demonstrate that the high co-smoothing model space encompasses models with arbitrary extraneous dynamics in their latent representations. To address this, we introduce a secondary metric -- *few-shot co-smoothing*, performing regression from the latent variables to held-out channels in the data using fewer trials. Our results indicate that among models with near-optimal co-smoothing, those with extraneous dynamics underperform in the few-shot co-smoothing compared to 'minimal' models that are devoid of such dynamics. We provide analytical insights into the origin of this phenomenon and further validate our findings on real neural data using two state-of-the-art methods: LFADS and STNDT. In the absence of ground truth, we suggest a novel measure to validate our approach. By cross-decoding the latent variables of all model pairs with high co-smoothing, we identify models with minimal extraneous dynamics. We find a correlation between few-shot co-smoothing performance and this new measure. In summary, we present a novel prediction metric designed to yield latent variables that more accurately reflect the ground truth, offering a significant improvement for latent dynamics inference.
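
The cross-decoding validation can be sketched in a similar spirit. Assuming each model's latents have been aligned over the same trials and time bins and flattened to (n_samples, n_latents) arrays (the dict layout, train/test split, and ridge penalty below are illustrative choices, not the paper's implementation):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def cross_decoding_scores(latents_by_model, train_frac=0.8, alpha=1e-2, seed=0):
    """For every ordered pair (A, B) of high-co-smoothing models, fit a
    linear map from A's latents to B's and score held-out R^2.

    latents_by_model: dict name -> (n_samples, n_latents) array, with rows
                      aligned across models.
    A model whose latents every other model can decode, while it cannot
    decode theirs in return, is a candidate 'minimal' model: it carries
    the shared dynamics and little else.
    """
    rng = np.random.default_rng(seed)
    n = len(next(iter(latents_by_model.values())))
    order = rng.permutation(n)
    tr, te = order[: int(train_frac * n)], order[int(train_frac * n):]

    scores = {}
    for a, X in latents_by_model.items():
        for b, Y in latents_by_model.items():
            if a == b:
                continue
            decoder = Ridge(alpha=alpha).fit(X[tr], Y[tr])
            scores[(a, b)] = r2_score(Y[te], decoder.predict(X[te]))
    return scores
```

Pairwise asymmetries in this score matrix are the signal: extraneous dynamics show up as latents that decode others well but resist being decoded themselves.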
Problem

Research questions and friction points this paper is trying to address.

Improve neural latent dynamics inference
Identify models with minimal extraneous dynamics
Validate latent variables without ground truth
Innovation

Methods, ideas, or system contributions that make the work stand out.

Few-shot co-smoothing metric
Cross-decoding latent variables
Minimal extraneous dynamics
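
Both the summary and the abstract lean on a student-teacher HMM construction to show that near-optimal co-smoothing tolerates arbitrary extraneous dynamics. A minimal sketch of that construction, with all sizes and matrices chosen purely for illustration: take the product of the teacher's state space with an independent distractor chain; emissions ignore the distractor, so predictive performance is untouched while the latent space inflates.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hmm(T_mat, E_mat, n_steps, rng):
    """Sample one state/observation trajectory from a discrete HMM.
    T_mat: (S, S) state transition matrix; E_mat: (S, O) emission matrix."""
    S = T_mat.shape[0]
    states, obs = np.empty(n_steps, int), np.empty(n_steps, int)
    s = rng.integers(S)
    for t in range(n_steps):
        states[t] = s
        obs[t] = rng.choice(E_mat.shape[1], p=E_mat[s])
        s = rng.choice(S, p=T_mat[s])
    return states, obs

# Teacher: a small HMM generating the "true" observations.
S_teacher, n_obs = 3, 5
T_teacher = rng.dirichlet(np.ones(S_teacher), size=S_teacher)
E_teacher = rng.dirichlet(np.ones(n_obs), size=S_teacher)

# Student with extraneous dynamics: its latent state is the product of the
# teacher state and an independent distractor chain. Emissions depend only
# on the teacher part, so the observation distribution is identical.
S_extra = 2
T_extra = rng.dirichlet(np.ones(S_extra), size=S_extra)
T_student = np.kron(T_teacher, T_extra)            # product-chain transitions
E_student = np.repeat(E_teacher, S_extra, axis=0)  # each product state emits like its teacher part

# Observations from the student are distributed exactly as the teacher's,
# so co-smoothing cannot tell the two apart; only the latent size differs.
_, teacher_obs = sample_hmm(T_teacher, E_teacher, 200, rng)
_, student_obs = sample_hmm(T_student, E_student, 200, rng)
```
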
Kabir Dabholkar
Faculty of Mathematics, Technion - Israel Institute of Technology
Omri Barak
Rappaport Faculty of Medicine and Network Biology Research Laboratory, Technion - Israel Institute of Technology