🤖 AI Summary
To address the challenge of accurately quantifying nonlinear dependencies in low-sample-size, low signal-to-noise ratio (SNR) biological time series such as fMRI, physiological, and behavioral data, this paper introduces *concurrence*, a model-agnostic criterion that measures statistical dependence via classifier-based alignment discriminability: a binary classifier is trained to distinguish temporally aligned from misaligned segment pairs drawn from the two series, and the classifier's ability to separate them serves as a proxy for dependence. Concurrence requires no parametric assumptions, prior knowledge, or hyperparameter tuning, and imposes neither large-sample nor linearity constraints; the authors link it theoretically to statistical dependence under mild regularity conditions. Evaluated on multimodal neurophysiological datasets, concurrence consistently outperforms established benchmarks, including Pearson correlation, Granger causality, and the Hilbert–Schmidt Independence Criterion (HSIC), achieving over 35% improvement in detection power for sequences shorter than 500 time points under low-SNR conditions.
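A minimal sketch of this classifier-based alignment test is given below. The function name `concurrence_score`, the segment length, the number of sampled pairs, and the random-forest classifier are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the concurrence idea: train a classifier to tell
# temporally aligned segment pairs from misaligned ones, and read held-out
# accuracy above chance as evidence of dependence between the two series.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def concurrence_score(x, y, seg_len=20, n_pairs=200, seed=0):
    """Cross-validated accuracy of an aligned-vs-misaligned classifier.
    Accuracy near 0.5 suggests independence; well above 0.5, dependence."""
    rng = np.random.default_rng(seed)
    T = min(len(x), len(y)) - seg_len          # last valid segment start
    feats, labels = [], []
    for _ in range(n_pairs):
        i = rng.integers(0, T)
        # Positive example: x- and y-segments start at the same time index.
        feats.append(np.concatenate([x[i:i + seg_len], y[i:i + seg_len]]))
        labels.append(1)
        # Negative example: the y-segment start is circularly shifted by at
        # least seg_len, so the two segments are never temporally aligned.
        j = (i + rng.integers(seg_len, T - seg_len)) % T
        feats.append(np.concatenate([x[i:i + seg_len], y[j:j + seg_len]]))
        labels.append(0)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    # Held-out alignment discriminability is the concurrence proxy.
    return cross_val_score(clf, np.array(feats), np.array(labels), cv=5).mean()


# Toy check: y depends on x only nonlinearly (zero linear correlation),
# yet aligned pairs should still be separable from misaligned ones.
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = x ** 2 + 0.5 * rng.standard_normal(500)
print(concurrence_score(x, y))
```

Note that this sketch ignores practical details the paper would have to handle, such as overlap between sampled segments across cross-validation folds and calibrating the chance-level baseline for a formal test of dependence.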
📝 Abstract
Measuring the statistical dependence between observed signals is a primary tool for scientific discovery. However, biological systems often exhibit complex non-linear interactions that currently cannot be captured without a priori knowledge or large datasets. We introduce a criterion for dependence, whereby two time series are deemed dependent if one can construct a classifier that distinguishes between temporally aligned and misaligned segments extracted from them. We show that this criterion, concurrence, is theoretically linked with dependence, and can become a standard approach for scientific analyses across disciplines, as it can expose relationships across a wide spectrum of signals (fMRI, physiological, and behavioral data) without ad-hoc parameter tuning or large amounts of data.