Towards Interpretable Time Series Foundation Models

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Lightweight instruction-tuned language models lack interpretable temporal reasoning capabilities because they have an insufficient understanding of time-series semantics. Method: The paper proposes a "linguified knowledge distillation" paradigm: a large multimodal model (LMM) automatically generates natural-language annotations (covering trend direction, noise intensity, extremum localization, etc.) for synthetically generated mean-reverting time series, and these annotations supervise the fine-tuning of a compact Qwen model. Contributions/Results: (1) the first fine-grained time-series language understanding benchmark explicitly designed for interpretability evaluation; (2) synthetic-data-driven distillation without reliance on ground-truth labels. Experiments demonstrate that the distilled small model significantly outperforms baselines across multiple time-series explanation tasks, while offering advantages for edge deployment and privacy preservation.

📝 Abstract
In this paper, we investigate the distillation of time series reasoning capabilities into small, instruction-tuned language models as a step toward building interpretable time series foundation models. Leveraging a synthetic dataset of mean-reverting time series with systematically varied trends and noise levels, we generate natural language annotations using a large multimodal model and use these to supervise the fine-tuning of compact Qwen models. We introduce evaluation metrics that assess the quality of the distilled reasoning (focusing on trend direction, noise intensity, and extremum localization) and show that the post-trained models acquire meaningful interpretive capabilities. Our results highlight the feasibility of compressing time series understanding into lightweight, language-capable models suitable for on-device or privacy-sensitive deployment. This work contributes a concrete foundation toward developing small, interpretable models that explain temporal patterns in natural language.
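The abstract describes a synthetic dataset of mean-reverting time series with systematically varied trends and noise levels. A minimal sketch of such a generator, assuming an Ornstein-Uhlenbeck-style process (the paper does not specify its exact simulation scheme; the function name and parameters below are illustrative):

```python
import numpy as np

def make_series(n=256, mu=0.0, theta=0.1, sigma=0.2, trend=0.0, seed=0):
    """Simulate a mean-reverting series that is pulled toward a mean level
    `mu` drifting linearly with slope `trend`, with Gaussian noise `sigma`.
    Illustrative only: parameter names and ranges are assumptions."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = mu
    for t in range(1, n):
        pull = theta * (mu + trend * t - x[t - 1])  # reversion toward drifting mean
        x[t] = x[t - 1] + pull + sigma * rng.standard_normal()
    return x

# Systematically vary trend and noise, as the abstract describes
dataset = [make_series(trend=tr, sigma=s, seed=i)
           for i, (tr, s) in enumerate([(0.0, 0.05), (0.02, 0.05),
                                        (0.0, 0.3), (-0.02, 0.3)])]
```

Each generated series would then be paired with a natural-language annotation produced by the large multimodal model.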
Problem

Research questions and friction points this paper is trying to address.

Develop interpretable time series foundation models using small language models
Distill time series reasoning capabilities into lightweight, instruction-tuned models
Enable natural language explanations of temporal patterns for on-device deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distill time series reasoning into small language models
Use synthetic data and large models for supervision
Evaluate trend, noise, and extremum interpretation capabilities
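The evaluation of trend interpretation could, for instance, compare a model's stated trend label against a reference label derived from the series itself. A hypothetical sketch (the paper's actual metrics are not shown; `trend_label` and the tolerance are assumptions):

```python
import numpy as np

def trend_label(series, tol=1e-3):
    """Reference trend direction from the least-squares slope of the series."""
    t = np.arange(len(series))
    slope = np.polyfit(t, np.asarray(series, dtype=float), 1)[0]
    if slope > tol:
        return "up"
    if slope < -tol:
        return "down"
    return "flat"

def trend_accuracy(predicted_labels, series_list):
    """Fraction of model-predicted trend labels matching the reference labels."""
    refs = [trend_label(s) for s in series_list]
    return float(np.mean([p == r for p, r in zip(predicted_labels, refs)]))
```

Analogous checks could score noise-intensity and extremum-localization statements against the known generation parameters.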
Authors

Matthieu Boileau, CNRS
Philippe Helluy, University of Strasbourg, France
Jeremy Pawlus, AxesSim, Strasbourg, France
Svitlana Vyetrenko, J. P. Morgan AI Research