🤖 AI Summary
The “black-box” nature of deep learning severely limits its use for scientific discovery, in particular for recovering the governing equations of graph dynamical systems, where interpretability and expressive power must be jointly ensured. To address this, we conduct a systematic evaluation of symbolic regression methods and propose Graph-KAN, a graph-structured Kolmogorov–Arnold Network. Graph-KAN explicitly models nonlinear node- and edge-level interactions via learnable, piecewise-smooth activation functions, enhancing model transparency and yielding more concise, analytically tractable equations. On both synthetic and real-world graph dynamical datasets, the KAN- and MLP-based architectures outperform existing baselines; notably, Graph-KAN achieves superior physical interpretability with fewer parameters, accurately recovering closed-form governing equations for diverse complex systems, including diffusion, synchronization, and contagion processes. This work establishes a new paradigm for interpretable scientific discovery in graph-based dynamical modeling.
📝 Abstract
The “black-box” nature of deep learning models presents a significant barrier to their adoption for scientific discovery, where interpretability is paramount. This challenge is especially pronounced when discovering the governing equations of dynamical processes on networks or graphs, since the topology itself shapes the processes' behavior. This paper provides a rigorous, comparative assessment of state-of-the-art symbolic regression techniques for this task. We evaluate established methods, including sparse regression and MLP-based architectures, and introduce a novel adaptation of Kolmogorov–Arnold Networks (KANs) for graphs, designed to exploit their inherent interpretability. Across a suite of synthetic and real-world dynamical systems, our results demonstrate that both MLP- and KAN-based architectures can successfully identify the underlying symbolic equations, significantly surpassing existing baselines. Critically, we show that KANs achieve this performance with greater parsimony and transparency, as their learnable activation functions provide a clearer mapping to the true physical dynamics. This study offers a practical guide for researchers, clarifying the trade-offs between model expressivity and interpretability, and establishes the viability of neural-based architectures for robust scientific discovery on complex systems.
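To make the idea concrete, below is a minimal sketch of what a Graph-KAN-style vector field could look like. It assumes the decomposition commonly used for dynamics on networks, dx_i/dt = f(x_i) + Σ_j A_ij g(x_j − x_i), with each univariate function represented as a learnable combination of basis functions in the KAN style. The names (`KANEdge`, `graph_kan_rhs`), the specific coupling form, and the piecewise-linear “hat” basis standing in for smooth splines are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hat_basis(x, grid):
    """Piecewise-linear 'hat' basis functions evaluated at x.
    Returns an array of shape x.shape + (len(grid),)."""
    x = np.clip(x, grid[0], grid[-1])
    h = grid[1] - grid[0]
    d = np.abs(x[..., None] - grid) / h   # distance to each knot, in knot units
    return np.maximum(0.0, 1.0 - d)

class KANEdge:
    """One learnable univariate function phi(x) = sum_k c_k B_k(x),
    the basic building block of a KAN layer (hats stand in here for
    the smoother splines a real KAN would use)."""
    def __init__(self, n_knots=8, lo=-3.0, hi=3.0, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = np.linspace(lo, hi, n_knots)
        self.coef = rng.normal(scale=0.1, size=n_knots)  # learnable coefficients

    def __call__(self, x):
        return hat_basis(x, self.grid) @ self.coef

def graph_kan_rhs(x, A, phi_self, phi_pair):
    """Hypothetical Graph-KAN vector field:
        dx_i/dt = phi_self(x_i) + sum_j A_ij * phi_pair(x_j - x_i)
    """
    diff = x[None, :] - x[:, None]               # pairwise differences x_j - x_i
    return phi_self(x) + (A * phi_pair(diff)).sum(axis=1)

# Toy usage: a 5-node ring graph with a random initial state.
n = 5
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
x = np.random.default_rng(1).normal(size=n)
print(graph_kan_rhs(x, A, KANEdge(seed=0), KANEdge(seed=1)))
```

The interpretability claim in the abstract rests on this structure: once trained, each univariate function can be plotted or fit symbolically on its own, so the self term and coupling term map directly onto candidate closed-form equations such as diffusion or Kuramoto-type synchronization.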