🤖 AI Summary
Deep time-series forecasting models face deployment limitations in safety-critical domains (e.g., autonomous driving, healthcare) due to their inherent lack of interpretability. To address this, we propose a symbolic, interpretable forecasting framework based on the Kolmogorov–Arnold Network (KAN), the first application of KAN to time-series prediction. Our method achieves intrinsic interpretability through a symbolically parameterized network architecture. We further introduce a prior-knowledge injection mechanism and a time-frequency collaborative representation learning strategy to ensure physical consistency while capturing multi-scale temporal dynamics. Extensive experiments on diverse real-world time-series benchmarks demonstrate that our approach maintains state-of-the-art prediction accuracy while generating human-understandable symbolic decision rationales, substantially enhancing model trustworthiness and practical deployability in high-stakes applications.
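To make the "symbolically parameterized architecture" idea concrete, here is a minimal, hypothetical sketch of a KAN-style layer: each edge carries a learnable univariate function expressed over a fixed symbolic basis, so a trained edge can be read back as a human-readable formula. The names (`KANEdge`, `KANLayer`, `BASIS`) and the polynomial/trigonometric basis are illustrative assumptions, not the paper's actual implementation.

```python
import math

# Illustrative KAN-style layer (assumed names, not the paper's code):
# learnable univariate functions live on edges; nodes only sum.
BASIS = [
    ("1",      lambda x: 1.0),
    ("x",      lambda x: x),
    ("x^2",    lambda x: x * x),
    ("sin(x)", lambda x: math.sin(x)),
]

class KANEdge:
    """One edge function phi(x) = sum_k c_k * basis_k(x)."""
    def __init__(self, coeffs):
        assert len(coeffs) == len(BASIS)
        self.coeffs = coeffs

    def __call__(self, x):
        return sum(c * f(x) for c, (_, f) in zip(self.coeffs, BASIS))

    def formula(self):
        # Symbolic readout: the source of intrinsic interpretability.
        terms = [f"{c:+g}*{name}"
                 for c, (name, _) in zip(self.coeffs, BASIS) if c != 0]
        return " ".join(terms) if terms else "0"

class KANLayer:
    """y_j = sum_i phi_ij(x_i): functions on edges, plain sums on nodes."""
    def __init__(self, edges):
        self.edges = edges  # edges[j][i] maps input i to output j

    def __call__(self, xs):
        return [sum(phi(x) for phi, x in zip(row, xs)) for row in self.edges]

edge = KANEdge([0.0, 2.0, 0.0, 1.0])   # phi(x) = 2x + sin(x)
layer = KANLayer([[edge]])
print(edge.formula())        # +2*x +1*sin(x)
print(layer([math.pi / 2]))  # [pi + 1]
```

In a real KAN the basis is typically a learnable B-spline grid per edge and training fits the coefficients; the point here is only that the fitted edge functions, unlike dense weight matrices, can be printed as formulas.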
📝 Abstract
Data within specific domains evolve with predictable patterns over time, motivating time series forecasting: predicting future trends from historical data. However, current deep forecasting methods achieve promising performance but generally lack interpretability, hindering trust and practical deployment in safety-critical applications such as autonomous driving and healthcare. In this paper, we propose a novel interpretable model, iTFKAN, for credible time series forecasting. Through model symbolization, iTFKAN's interpretability enables further exploration of model decision rationales and underlying data patterns. In addition, iTFKAN develops two strategies, prior knowledge injection and time-frequency synergy learning, to effectively guide model learning on complex, intertwined time series data. Extensive experimental results demonstrate that iTFKAN achieves promising forecasting performance while offering strong interpretability.
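The abstract's time-frequency synergy strategy is not specified here, but the general idea it relies on can be sketched: view the same series in both the time domain and the frequency domain, where periodic multi-scale structure becomes explicit. The following toy example (assumed, not the paper's method) uses a plain DFT to recover the dominant period of a signal.

```python
import cmath
import math

# Illustrative time-frequency view (not iTFKAN's implementation):
# a plain DFT exposes periodic structure hidden in the raw time-domain values.
def dft(series):
    n = len(series)
    return [sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(series))
            for k in range(n)]

def dominant_period(series):
    """Period (in steps) of the strongest non-DC frequency bin."""
    spectrum = dft(series)
    half = range(1, len(series) // 2 + 1)  # skip DC and mirrored bins
    k = max(half, key=lambda k: abs(spectrum[k]))
    return len(series) / k

# Toy series with period 8: the frequency view recovers it directly.
series = [math.sin(2 * math.pi * t / 8) for t in range(32)]
print(dominant_period(series))  # 8.0
```

A forecaster that consumes both representations can, in principle, model slow trends from the time view and seasonal cycles from the frequency view, which is the kind of multi-scale guidance the abstract alludes to.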