Training-Free Time Series Classification via In-Context Reasoning with LLM Agents

📅 2025-10-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the scarcity of labeled data and the suboptimal performance of zero-shot methods in time series classification (TSC), this paper proposes FETA, a training-free multi-agent framework. FETA introduces a three-stage collaborative mechanism: channel-wise decomposition, exemplar retrieval, and large language model (LLM)-based reasoning. It retrieves prototypical intra-class samples via structural similarity matching at the channel level, then applies LLM in-context reasoning with confidence-weighted fusion to achieve accurate zero-shot classification while keeping input length controllable. Fully parameter-free, FETA offers strong interpretability and plug-and-play deployment. Evaluated on nine UEA benchmark datasets, FETA surpasses multiple trained baselines under a fully training-free setting.

📝 Abstract
Time series classification (TSC) spans diverse application scenarios, yet labeled data are often scarce, making task-specific training costly and inflexible. Recent reasoning-oriented large language models (LLMs) show promise in understanding temporal patterns, but purely zero-shot usage remains suboptimal. We propose FETA, a multi-agent framework for training-free TSC via exemplar-based in-context reasoning. FETA decomposes a multivariate series into channel-wise subproblems, retrieves a few structurally similar labeled examples for each channel, and leverages a reasoning LLM to compare the query against these exemplars, producing channel-level labels with self-assessed confidences; a confidence-weighted aggregator then fuses all channel decisions. This design eliminates the need for pretraining or fine-tuning, improves efficiency by pruning irrelevant channels and controlling input length, and enhances interpretability through exemplar grounding and confidence estimation. On nine challenging UEA datasets, FETA achieves strong accuracy under a fully training-free setting, surpassing multiple trained baselines. These results demonstrate that a multi-agent in-context reasoning framework can transform LLMs into competitive, plug-and-play TSC solvers without any parameter training. The code is available at https://github.com/SongyuanSui/FETATSC.
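The retrieval stage described above can be illustrated with a short sketch. The abstract only says exemplars are retrieved by "structural similarity" at the channel level; the distance measure below (z-normalized Euclidean) and the function names are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def retrieve_exemplars(query_channel, labeled_channels, labels, k=3):
    """Retrieve the k labeled channel series most similar to the query.

    Hypothetical sketch: the paper describes structural similarity
    matching per channel; here we assume z-normalized Euclidean
    distance as the measure. Returns (series, label) pairs that would
    be placed in the LLM's context as exemplars.
    """
    def znorm(x):
        x = np.asarray(x, dtype=float)
        s = x.std()
        return (x - x.mean()) / s if s > 0 else x - x.mean()

    q = znorm(query_channel)
    # Distance from the query to every labeled candidate channel.
    dists = [np.linalg.norm(q - znorm(c)) for c in labeled_channels]
    order = np.argsort(dists)[:k]
    return [(labeled_channels[i], labels[i]) for i in order]
```

Under this assumption, a rising query series retrieves the rising exemplar first regardless of scale, since z-normalization removes offset and amplitude before comparison.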
Problem

Research questions and friction points this paper is trying to address.

Classifying time series without training using in-context reasoning
Addressing scarce labeled data via multi-agent exemplar comparison
Eliminating pretraining needs while maintaining competitive accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework enables training-free time series classification
Decomposes multivariate series into channel-wise subproblems for analysis
Uses confidence-weighted aggregation to fuse channel-level decisions
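The last step, confidence-weighted aggregation, can be sketched as follows. The paper states that channel-level labels carry self-assessed confidences and are fused by a confidence-weighted aggregator; summing confidence per label, as done here, is one plausible weighting scheme and an assumption of this sketch.

```python
from collections import defaultdict

def fuse_channel_decisions(decisions):
    """Fuse per-channel (label, confidence) votes into one final label.

    `decisions` is a list of (label, confidence) pairs, one per channel.
    Sketch assumption: the label with the largest summed confidence
    wins; the paper's exact weighting scheme may differ.
    """
    scores = defaultdict(float)
    for label, conf in decisions:
        scores[label] += conf
    return max(scores, key=scores.get)
```

For example, three channels voting ("walk", 0.9), ("run", 0.4), ("walk", 0.3) would fuse to "walk", since its summed confidence (1.2) exceeds that of "run" (0.4).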