🤖 AI Summary
This work addresses the lack of systematic evaluation of large language models (LLMs) on multi-step time-series reasoning and complex temporal tasks—such as constrained forecasting and anomaly detection—by introducing TSAIA, a first comprehensive benchmark for time-series AI assistants. Methodologically, the authors propose a dynamic, extensible question-generation framework that synthesizes heterogeneous tasks from real-world datasets and domain literature, coupled with task-specific success criteria and a suite of tailored inference-quality metrics. An experimental assessment of eight state-of-the-art LLMs reveals substantial limitations in compositional time-series reasoning and end-to-end analytical pipeline construction. To foster reproducibility and extensibility, the authors fully open-source the benchmark data, implementation code, and evaluation framework, establishing a scalable foundation for evaluating and advancing LLM-based time-series intelligence.
📝 Abstract
The rapid advancement of Large Language Models (LLMs) has sparked growing interest in their application to time series analysis tasks. However, their ability to perform complex reasoning over temporal data in real-world application domains remains underexplored. To move toward this goal, a first step is to establish a rigorous benchmark dataset for evaluation. In this work, we introduce the TSAIA Benchmark, a first attempt to evaluate LLMs as time-series AI assistants. To ensure both scientific rigor and practical relevance, we surveyed over 20 academic publications and identified 33 real-world task formulations. The benchmark encompasses a broad spectrum of challenges, ranging from constraint-aware forecasting to anomaly detection with threshold calibration: tasks that require compositional reasoning and multi-step time series analysis. The question generator is designed to be dynamic and extensible, supporting continuous expansion as new datasets or task types are introduced. Given the heterogeneous nature of the tasks, we adopt task-specific success criteria and tailored inference-quality metrics to ensure meaningful evaluation for each task. We apply this benchmark to assess eight state-of-the-art LLMs under a unified evaluation protocol. Our analysis reveals limitations in current models' ability to assemble complex time series analysis workflows, underscoring the need for specialized methodologies for domain-specific adaptation. Our benchmark is available at https://huggingface.co/datasets/Melady/TSAIA, and the code is available at https://github.com/USC-Melady/TSAIA.
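The abstract notes that heterogeneous tasks demand task-specific success criteria rather than a single shared metric. As an illustration only, the sketch below shows one way such criteria could be organized as a registry of per-task checkers; the function names, task keys, and tolerance thresholds are hypothetical assumptions, not TSAIA's actual evaluation API.

```python
import numpy as np

# Hypothetical registry of task-specific success criteria.
# All names and thresholds below are illustrative, not TSAIA's real interface.

def constrained_forecast_success(forecast, actual, upper_bound, mae_tol=5.0):
    """Succeed only if the forecast respects the constraint AND is accurate."""
    forecast = np.asarray(forecast, dtype=float)
    actual = np.asarray(actual, dtype=float)
    satisfies_constraint = bool(np.all(forecast <= upper_bound))
    mae = float(np.mean(np.abs(forecast - actual)))
    return satisfies_constraint and mae <= mae_tol

def anomaly_detection_success(pred_labels, true_labels, f1_tol=0.5):
    """Succeed if predicted anomaly labels reach a minimum F1 score."""
    pred = np.asarray(pred_labels, dtype=bool)
    true = np.asarray(true_labels, dtype=bool)
    tp = int(np.sum(pred & true))
    fp = int(np.sum(pred & ~true))
    fn = int(np.sum(~pred & true))
    if tp == 0:
        return False
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return f1 >= f1_tol

# Each task type dispatches to its own criterion at evaluation time.
SUCCESS_CRITERIA = {
    "constrained_forecasting": constrained_forecast_success,
    "anomaly_detection": anomaly_detection_success,
}
```

A harness could then score a model's answer with `SUCCESS_CRITERIA[task_type](prediction, ...)`, which keeps evaluation meaningful per task: a constrained forecast fails outright if it violates its constraint, regardless of accuracy.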