Towards Accurate and Interpretable Time-series Forecasting: A Polynomial Learning Approach

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing time series forecasting methods struggle to simultaneously achieve high prediction accuracy and feature-level interpretability, limiting user trust and the effectiveness of early warning systems. To address this challenge, this work proposes Interpretable Polynomial Learning (IPL), a novel approach that explicitly models raw features and their arbitrary-order interactions through a polynomial representation embedded directly within the model architecture. This design inherently preserves temporal dependencies while endowing the model with intrinsic interpretability. By adjusting the polynomial order, IPL offers a flexible trade-off between predictive accuracy and interpretability. Experimental results on synthetic data, Bitcoin prices, and real-world antenna measurements demonstrate that IPL not only maintains high forecasting accuracy but also significantly outperforms existing interpretable methods, enabling the construction of more concise and effective early warning mechanisms.

📝 Abstract
Time series forecasting enables early warning and has driven asset performance management from traditional planned maintenance to predictive maintenance. However, the lack of interpretability in forecasting methods undermines users' trust and complicates debugging for developers. Consequently, interpretable time-series forecasting has attracted increasing research attention. Nevertheless, existing methods suffer from several limitations, including insufficient modeling of temporal dependencies, lack of feature-level interpretability to support early warning, and difficulty in simultaneously achieving accuracy and interpretability. This paper proposes the interpretable polynomial learning (IPL) method, which integrates interpretability into the model structure by explicitly modeling original features and their interactions of arbitrary order through polynomial representations. This design preserves temporal dependencies, provides feature-level interpretability, and offers a flexible trade-off between prediction accuracy and interpretability by adjusting the polynomial degree. We evaluate IPL on simulated and Bitcoin price data, showing that it achieves high prediction accuracy with superior interpretability compared with widely used explainability methods. Experiments on field-collected antenna data further demonstrate that IPL yields simpler and more efficient early warning mechanisms.
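The abstract describes fitting a linear model over original features and their arbitrary-order polynomial interactions, so that each coefficient is directly attributable to a named term. The paper's actual IPL architecture is not reproduced here; the sketch below only illustrates the general idea under assumed conventions (the `polynomial_terms` helper, feature names `x0`, `x1`, and the synthetic generator are all illustrative, not from the paper).

```python
import numpy as np
from itertools import combinations_with_replacement

def polynomial_terms(X, degree):
    """Expand features into polynomial terms up to `degree`,
    returning the design matrix and human-readable term names."""
    n, d = X.shape
    cols, names = [np.ones(n)], ["1"]
    for k in range(1, degree + 1):
        for combo in combinations_with_replacement(range(d), k):
            cols.append(np.prod(X[:, list(combo)], axis=1))
            names.append("*".join(f"x{j}" for j in combo))
    return np.column_stack(cols), names

# Synthetic example: y depends on x0 and on the x0*x1 interaction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                     # e.g. two lagged inputs
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 0] * X[:, 1]  # known generator

Phi, names = polynomial_terms(X, degree=2)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Each nonzero coefficient names the feature interaction it weights,
# giving the feature-level attribution the abstract emphasizes.
for name, c in zip(names, coef):
    if abs(c) > 1e-8:
        print(f"{name}: {c:+.3f}")
```

Raising `degree` admits richer interactions at the cost of more terms to interpret, mirroring the accuracy-interpretability trade-off the paper attributes to the polynomial order.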
Problem

Research questions and friction points this paper is trying to address.

time-series forecasting
interpretability
temporal dependencies
feature-level interpretability
accuracy-interpretability trade-off
Innovation

Methods, ideas, or system contributions that make the work stand out.

interpretable time-series forecasting
polynomial learning
feature-level interpretability
temporal dependencies
predictive maintenance
Bo Liu
Center for Intelligent Decision-Making and Machine Learning, School of Management, Xi’an Jiaotong University, Xi’an, China; The 39th Research Institute of China Electronics Technology Group Corporation, Xi’an, China
Shao-Bo Lin
Xi'an Jiaotong University
Learning theory · Approximation theory · AI in Management Science
Changmiao Wang
Shenzhen Research Institute of Big Data, Shenzhen, China
Xiaotong Liu
Center for Intelligent Decision-Making and Machine Learning, School of Management, Xi’an Jiaotong University, Xi’an, China