🤖 AI Summary
Existing transfer entropy estimation methods suffer from the curse of dimensionality, rely on strong distributional assumptions, and have high sample complexity. This paper presents the first application of score-based diffusion models to transfer entropy estimation. By parameterizing the score function with neural networks, the proposed approach implicitly models high-dimensional conditional distributions, enabling end-to-end, distribution-free estimation of transfer entropy. Crucially, it circumvents explicit density modeling and reduces reliance on large sample sizes, significantly improving scalability and robustness when estimating directed information flow in high-dimensional time series. Extensive experiments on synthetic and real-world datasets demonstrate that the method consistently outperforms state-of-the-art neural estimators and classical approaches in accuracy, stability, and small-sample performance. The proposed framework establishes a novel paradigm for information-flow analysis in complex dynamical systems.
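For reference, the quantity being estimated is standard, though the summary never writes it out: transfer entropy from $X$ to $Y$ is the conditional mutual information between the target's next value and the source's past, given the target's own past (history lengths $k$ and $l$ are estimator choices, not specified here):

```latex
T_{X \to Y}
  = I\bigl(Y_t \,;\, X_{t-k:t-1} \,\bigm|\, Y_{t-l:t-1}\bigr)
  = H\bigl(Y_t \mid Y_{t-l:t-1}\bigr) - H\bigl(Y_t \mid Y_{t-l:t-1},\, X_{t-k:t-1}\bigr)
```

The second form makes the "directed information flow" reading explicit: it is the reduction in uncertainty about $Y_t$ obtained by additionally conditioning on the past of $X$.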
📝 Abstract
Transfer entropy measures directed information flow in time series and has become a fundamental quantity in applications spanning neuroscience, finance, and complex systems analysis. However, existing estimation methods suffer from the curse of dimensionality, require restrictive distributional assumptions, or need exponentially large datasets for reliable convergence. We address these limitations by proposing TENDE (Transfer Entropy Neural Diffusion Estimation), a novel approach that leverages score-based diffusion models to estimate transfer entropy through conditional mutual information. By learning score functions of the relevant conditional distributions, TENDE provides flexible, scalable estimation while making minimal assumptions about the underlying data-generating process. We demonstrate superior accuracy and robustness compared to existing neural estimators and other state-of-the-art approaches across synthetic benchmarks and real data.
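To make the conditional-mutual-information framing concrete, here is a minimal sanity-check sketch for the special linear-Gaussian case, where transfer entropy has a closed form as a log-ratio of conditional variances. This is an illustration only, not TENDE: the function name `gaussian_te`, the history length of 1, and the coupling coefficients are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_te(x, y):
    """Estimate TE_{X->Y} (history length 1) for jointly Gaussian series
    via 0.5 * log( Var(Y_t | Y_{t-1}) / Var(Y_t | Y_{t-1}, X_{t-1}) )."""
    y_t, y_past, x_past = y[1:], y[:-1], x[:-1]

    def resid_var(target, predictors):
        # Residual variance of an ordinary least-squares fit.
        beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
        return (target - predictors @ beta).var()

    ones = np.ones_like(y_t)
    v_reduced = resid_var(y_t, np.column_stack([ones, y_past]))
    v_full = resid_var(y_t, np.column_stack([ones, y_past, x_past]))
    return 0.5 * np.log(v_reduced / v_full)

# Coupled pair: X drives Y (coefficient 0.8); Y does not feed back into X.
n = 20_000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

te_xy = gaussian_te(x, y)  # clearly positive: X -> Y coupling is detected
te_yx = gaussian_te(y, x)  # near zero: no information flows from Y to X
```

The asymmetry between `te_xy` and `te_yx` is the point: unlike correlation, transfer entropy is directional. Neural estimators such as TENDE target exactly this quantity in settings where no Gaussian closed form exists.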