🤖 AI Summary
This study investigates how sensitive causal discovery methods are to mismatches between observed sampling times and the true timing of underlying events. By systematically varying sampling rates and observation window lengths, and drawing on ideas from signal processing theory, the work characterizes how the stability of both classical and modern causal discovery algorithms depends on these sampling hyperparameters. Combining theoretical analysis with empirical evaluation, it shows that the performance of several mainstream methods depends strongly on the sampling configuration. These findings offer theoretical grounding and practical guidance for temporal causal modeling, highlighting the critical role of sampling design in reliable causal inference from time-series data.
📝 Abstract
Causal discovery problems use a set of observations to deduce causality between variables in the real world, typically to answer questions about biological or physical systems. These observations are often recorded at regular time intervals, determined by a user or a machine, depending on the experiment design. There is generally no guarantee that the timing of these recordings matches the timing of the underlying biological or physical events. In this paper, we examine the sensitivity of causal discovery methods to this potential mismatch. We consider empirical and theoretical evidence to understand how causal discovery performance is affected by changes in sampling rate and window length. We demonstrate that both classical and recent causal discovery methods exhibit sensitivity to these hyperparameters, and we discuss how ideas from signal processing may help us understand these phenomena.
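To make the sampling-mismatch issue concrete, here is a minimal illustrative sketch (not code from the paper, and all names and parameter choices are our own assumptions): we simulate a bivariate autoregressive process in which X drives Y at lag 1, then estimate the lag-1 cross-coefficient by least squares after subsampling the series at several rates. As the subsampling factor grows past the timescale of the true dependence, the estimated direct lagged effect shrinks toward zero.

```python
# Hypothetical demonstration of how subsampling can obscure a lagged
# causal link. True model: X_t = a*X_{t-1} + e,  Y_t = a*Y_{t-1} + b*X_{t-1} + e.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=20000, a=0.5, b=0.8):
    """Simulate the bivariate AR(1) process with a true X -> Y link at lag 1."""
    x = np.zeros(n)
    y = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + rng.standard_normal()
        y[t] = a * y[t - 1] + b * x[t - 1] + rng.standard_normal()
    return x, y

def lag1_cross_coef(x, y):
    """OLS coefficient of X_{t-1} in a regression of Y_t on Y_{t-1} and X_{t-1}."""
    design = np.column_stack([y[:-1], x[:-1]])
    coefs, *_ = np.linalg.lstsq(design, y[1:], rcond=None)
    return coefs[1]

x, y = simulate()
for k in (1, 2, 5, 10):  # subsampling factor: keep every k-th observation
    print(f"subsample factor {k}: lag-1 X->Y coefficient "
          f"{lag1_cross_coef(x[::k], y[::k]):.3f}")
```

At the original rate the regression recovers a coefficient near the true b; at a factor of 10 the direct lag-1 effect is largely washed out, even though the underlying causal mechanism is unchanged. This is the kind of sampling sensitivity the paper studies systematically across causal discovery methods.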