TS-Haystack: A Multi-Scale Retrieval Benchmark for Time Series Language Models

📅 2026-02-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge that existing time series language models struggle to accurately retrieve local events within long-context sequences—such as million-point sensor streams spanning hours—due to training and evaluation protocols predominantly confined to short sequences. To this end, we introduce TS-Haystack, the first multi-scale long-context retrieval benchmark tailored for time series language models. Inspired by the “needle-in-a-haystack” paradigm, it embeds short-duration activity segments into accelerometer data to systematically evaluate model performance across four tasks: direct retrieval, temporal reasoning, multi-step reasoning, and contextual anomaly detection. Experiments reveal that while classification accuracy can be maintained or even improved under compression ratios up to 176×, retrieval performance degrades significantly with increasing context length, indicating that fine-grained temporal information is often lost during compression. This underscores the necessity of decoupling sequence length from computational complexity to preserve temporal fidelity.

📝 Abstract
Time Series Language Models (TSLMs) are emerging as unified models for reasoning over continuous signals in natural language. However, long-context retrieval remains a major limitation: existing models are typically trained and evaluated on short sequences, while real-world time-series sensor streams can span millions of datapoints. This mismatch requires precise temporal localization under strict computational constraints, a regime not captured by current benchmarks. We introduce TS-Haystack, a long-context temporal retrieval benchmark comprising ten task types across four categories: direct retrieval, temporal reasoning, multi-step reasoning, and contextual anomaly detection. The benchmark uses controlled needle insertion, embedding short activity bouts into longer longitudinal accelerometer recordings, enabling systematic evaluation across context lengths ranging from seconds to 2 hours per sample. We hypothesize that existing TSLM time series encoders overlook temporal granularity as context length increases, creating a task-dependent effect: compression aids classification but impairs retrieval of localized events. Across multiple models and encoding strategies, we observe a consistent divergence between classification and retrieval behavior. Learned latent compression preserves or improves classification accuracy at compression ratios up to 176$\times$, but retrieval performance degrades with context length, losing temporally localized information. These results highlight the importance of architectural designs that decouple sequence length from computational complexity while preserving temporal fidelity.
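The controlled needle-insertion setup described in the abstract can be pictured with a minimal sketch. This is an illustration only, not the paper's actual protocol: the single-axis signal, the hard splice without boundary smoothing, and the `insert_needle` helper are all assumptions.

```python
import numpy as np

def insert_needle(haystack, needle, position):
    """Embed a short 'needle' activity segment into a long 'haystack'
    signal and return the spliced stream plus the ground-truth span
    used to score retrieval. A hypothetical sketch of the
    needle-in-a-haystack construction, not the benchmark's code."""
    out = haystack.copy()
    out[position:position + len(needle)] = needle
    return out, (position, position + len(needle))

# Example: a 2-hour stream at an assumed 100 Hz (720,000 points)
# with a 10-second synthetic activity bout inserted partway through.
rng = np.random.default_rng(0)
haystack = rng.normal(0.0, 0.1, size=2 * 3600 * 100)    # background wear
needle = np.sin(np.linspace(0, 20 * np.pi, 10 * 100))   # synthetic bout
stream, span = insert_needle(haystack, needle, position=250_000)
```

A retrieval task then asks the model to localize `span` from the full stream, while a classification task only needs the bout's coarse identity; the benchmark varies the haystack length to probe both.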
Problem

Research questions and friction points this paper is trying to address.

time series
long-context retrieval
temporal localization
language models
computational constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

time series language models
long-context retrieval
temporal localization
latent compression
multi-scale benchmark
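The task-dependent effect of latent compression can be sketched with a toy stand-in. The paper studies learned compression; the fixed-ratio window averaging below is only an assumed simplification showing why coarse statistics survive a 176× reduction while a short localized event is diluted.

```python
import numpy as np

def compress(signal, ratio=176):
    """Hypothetical fixed-ratio compressor: average each non-overlapping
    window of `ratio` samples into one latent value. A stand-in for
    learned latent compression, not the paper's encoder."""
    t = len(signal) - len(signal) % ratio   # trim to a window multiple
    return signal[:t].reshape(-1, ratio).mean(axis=1)

signal = np.zeros(176 * 100)     # 17,600-sample stream
signal[5000:5010] = 1.0          # a 10-sample "needle" event
z = compress(signal)             # 17,600 samples -> 100 latents (176x)
# The needle's unit amplitude is diluted by the window average to
# 10 / 176, so it barely registers in the compressed representation.
```

Global properties (here, the overall mean) are preserved exactly, which is why classification can survive aggressive compression even as fine-grained temporal localization degrades.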
👥 Authors
Nicolas Zumarraga · Agentic Systems Lab, ETH Zurich
Thomas Kaar · Agentic Systems Lab, ETH Zurich; Stanford Mussallem Center for Biodesign, Stanford University
Ning Wang · Agentic Systems Lab, ETH Zurich
Maxwell A. Xu · University of Illinois Urbana-Champaign; Google
Max Rosenblattl · Stanford Mussallem Center for Biodesign, Stanford University
Markus Kreft · ETH Zurich (machine learning, energy efficiency, smart grid, electric vehicles, sustainability)
Kevin O'Sullivan · Agentic Systems Lab, ETH Zurich
Paul Schmiedmayer · Stanford University (Digital Health, TSLM, AI, Software Engineering, Mobile Applications)
Patrick Langer · Agentic Systems Lab, ETH Zurich; Stanford Mussallem Center for Biodesign, Stanford University; Centre for Digital Health Interventions, ETH Zurich
Robert Jakob · Agentic Systems Lab, ETH Zurich