🤖 AI Summary
Time-series forecasting faces three core challenges: instability of foundation models, lack of interpretability in model ensembles, and the inability of large language models (LLMs) to directly capture temporal causality. To address these, we propose the first interpretable, conversational forecasting framework that reconfigures an LLM as a “dialogic arbiter” with explicit temporal causal reasoning capability, dynamically orchestrating multi-model ensembles via iterative causal inference. Our method introduces SHAP-guided R1-style fine-tuning, rendering ensemble weights interpretable as causal statements about time-varying dynamics. Evaluated on GIFT-Eval—a comprehensive benchmark spanning 23 datasets and 97 forecasting configurations—our approach achieves new state-of-the-art performance, significantly outperforming existing methods on both CRPS (Continuous Ranked Probability Score) and MASE (Mean Absolute Scaled Error).
📝 Abstract
The proliferation of time series foundation models has created a landscape in which no single method achieves consistent superiority, framing the central challenge not as finding the best model but as orchestrating an optimal ensemble with interpretability. While Large Language Models (LLMs) offer powerful reasoning capabilities, their direct application to time series forecasting has proven ineffective. We address this gap by repositioning the LLM as an intelligent judge that evaluates, explains, and strategically coordinates an ensemble of foundation models. To overcome the LLM's inherent lack of domain-specific knowledge of time series, we introduce an R1-style fine-tuning process, guided by SHAP-based faithfulness scores, which teaches the model to interpret ensemble weights as meaningful causal statements about temporal dynamics. The trained agent then engages in iterative, multi-turn conversations to perform forward-looking assessments, provide causally grounded explanations for its weighting decisions, and adaptively refine the optimization strategy. Validated on the GIFT-Eval benchmark across 23 datasets and 97 settings, our approach significantly outperforms leading time series foundation models on both CRPS and MASE metrics, establishing new state-of-the-art results.
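To make the ensemble mechanics and evaluation metrics concrete, the sketch below shows (as an illustration, not the paper's implementation) how arbiter-assigned weights combine per-model forecasts, and how the two reported metrics are computed: MASE scales the forecast MAE by the in-sample MAE of a seasonal naive baseline, and CRPS is estimated from forecast samples via the standard energy-form estimator. All function names here are hypothetical.

```python
import numpy as np

def weighted_ensemble(forecasts, weights):
    """Combine per-model point forecasts with arbiter-assigned weights.

    forecasts: (n_models, horizon) array; weights: (n_models,) nonnegative.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize defensively so weights sum to 1
    return w @ np.asarray(forecasts, dtype=float)

def mase(y_true, y_pred, y_train, m=1):
    """Mean Absolute Scaled Error: forecast MAE divided by the in-sample
    MAE of the seasonal naive forecast with seasonal period m."""
    hist = np.asarray(y_train, dtype=float)
    scale = np.mean(np.abs(hist[m:] - hist[:-m]))
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))) / scale

def crps_samples(samples, y):
    """Sample-based CRPS estimator: E|X - y| - 0.5 * E|X - X'|,
    where X, X' are independent draws from the forecast distribution."""
    s = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(s - y))
    term2 = 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))
    return term1 - term2

# Example: two models over a 3-step horizon, weighted 3:1 by the arbiter
f = [[10.0, 11.0, 12.0],
     [14.0, 15.0, 16.0]]
combined = weighted_ensemble(f, [0.75, 0.25])  # → [11., 12., 13.]
```

Lower is better for both metrics: a perfect point forecast yields MASE of 0, and a forecast distribution concentrated on the true value yields CRPS of 0.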