Rethinking Zero-Shot Time Series Classification: From Task-specific Classifiers to In-Context Inference

📅 2026-01-31
📈 Citations: 0
Influential: 0

🤖 AI Summary
This work addresses a critical limitation in existing zero-shot time series classification methods, which rely on task-specific classifiers that violate the no-training deployment principle and introduce evaluation bias. To overcome this, the authors propose TIC-FM, the first framework to incorporate in-context learning into zero-shot time series classification. TIC-FM performs prediction in a single forward pass by treating labeled examples as contextual prompts, requiring no parameter updates. Theoretical analysis demonstrates that this approach can emulate gradient-based training dynamics, effectively replacing conventional classifiers. The framework integrates a time series encoder, a lightweight projection adapter, and a chunk-wise masked latent memory Transformer to enable efficient context-aware inference. Evaluated on 128 UCR datasets, TIC-FM achieves state-of-the-art performance, with particularly significant gains in extremely low-label scenarios.
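The single-forward-pass idea in the summary — treating labeled examples as contextual prompts and predicting via attention rather than a trained classifier — can be sketched minimally. This is a hedged illustration, not TIC-FM itself: the `encode` function below is a hand-rolled stand-in for the paper's pretrained time series encoder and projection adapter, and similarity-weighted label voting stands in for the chunk-wise masked latent memory Transformer.

```python
import numpy as np

def encode(series):
    # Stand-in encoder: mean, std, and first-difference energy as features.
    # TIC-FM uses a pretrained time series encoder plus a projection
    # adapter; this placeholder only mimics the interface.
    d = np.diff(series)
    return np.array([series.mean(), series.std(), np.sqrt((d ** 2).mean())])

def in_context_predict(context_series, context_labels, test_series):
    """Label test instances by attending over labeled context embeddings.

    One forward pass, no parameter updates: softmax similarity-weighted
    voting over context labels, in the spirit of in-context inference.
    """
    C = np.stack([encode(s) for s in context_series])   # (n_ctx, d)
    Q = np.stack([encode(s) for s in test_series])      # (n_test, d)
    scores = Q @ C.T / np.sqrt(C.shape[1])              # scaled dot products
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # attention weights
    labels = np.asarray(context_labels)
    onehot = np.eye(labels.max() + 1)[labels]           # (n_ctx, n_classes)
    probs = weights @ onehot                            # (n_test, n_classes)
    return probs.argmax(axis=1)

# Toy example: two classes distinguished by a sinusoidal component.
rng = np.random.default_rng(0)
t = np.linspace(0, 6, 50)
ctx = [rng.normal(0, 0.1, 50), rng.normal(0, 0.1, 50) + np.sin(t)]
ctx_labels = [0, 1]
test = [rng.normal(0, 0.1, 50) + np.sin(t)]
pred = in_context_predict(ctx, ctx_labels, test)        # → array([1])
```

Note that nothing is fitted at deployment time: adding or removing context examples changes predictions immediately, which is what lets such a scheme honor the no-training premise the paper argues trained classifiers violate.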

📝 Abstract
The zero-shot evaluation of time series foundation models (TSFMs) for classification typically uses a frozen encoder followed by a task-specific classifier. However, this practice violates the training-free premise of zero-shot deployment and introduces evaluation bias due to classifier-dependent training choices. To address this issue, we propose TIC-FM, an in-context learning framework that treats the labeled training set as context and predicts labels for all test instances in a single forward pass, without parameter updates. TIC-FM pairs a time series encoder and a lightweight projection adapter with a split-masked latent memory Transformer. We further provide theoretical justification that in-context inference can subsume trained classifiers and can emulate gradient-based classifier training within a single forward pass. Experiments on 128 UCR datasets show strong accuracy, with consistent gains in the extreme low-label regime, highlighting training-free transfer.
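The abstract's claim that in-context inference can emulate gradient-based classifier training echoes a known correspondence between linear attention and a gradient-descent step. A minimal version of that correspondence (an illustrative sketch, not necessarily the paper's exact argument) for a linear classifier trained on context pairs $(x_i, y_i)$ from the zero initialization $W_0 = 0$:

```latex
\[
L(W) = \tfrac{1}{2}\sum_i \lVert W x_i - y_i \rVert^2, \qquad
W_1 = W_0 - \eta \nabla L(W_0)\big|_{W_0 = 0} = \eta \sum_i y_i x_i^\top
\]
\[
W_1 x_\ast = \eta \sum_i \left( x_i^\top x_\ast \right) y_i
\]
```

The prediction after one training step is exactly unnormalized linear attention with query $x_\ast$, keys $x_i$, and values $y_i$, so a single attention forward pass over the labeled context can reproduce that gradient step without any parameter update.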
Problem

Research questions and friction points this paper is trying to address.

zero-shot
time series classification
evaluation bias
task-specific classifier
training-free
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context learning
zero-shot time series classification
foundation models
training-free inference
latent memory Transformer