🤖 AI Summary
This paper addresses the challenge of effectively modeling asynchronous time series with large language models (LLMs). We propose an event-driven framework that handles multiple analysis tasks. Methodologically, we introduce (1) a natural-language event encoding scheme designed specifically for asynchronous time series, which jointly maps irregular timestamps and event descriptions into LLM-compatible textual inputs; and (2) Stochastic Soft Prompting, a lightweight adaptation technique that learns prompt representations while keeping the LLM's weights frozen, avoiding full-parameter or QLoRA-based fine-tuning. Our approach achieves state-of-the-art performance across three core tasks (forecasting, anomaly detection, and imputation) on multiple real-world asynchronous time-series benchmarks, consistently outperforming existing fine-tuning methods.
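The event encoding idea above can be sketched as follows. This is a minimal illustration: the serialization format, field names, and the choice of minutes as the gap unit are assumptions, not the paper's exact scheme.

```python
from datetime import datetime

def encode_events(events):
    """Serialize timestamped events into an LLM-readable prompt.

    `events` is a list of (iso_timestamp, description) pairs. Irregular
    gaps between events are expressed in natural language so the LLM
    sees both when an event happened and how long since the last one.
    The exact textual format here is hypothetical.
    """
    lines = []
    prev = None
    for ts, desc in events:
        t = datetime.fromisoformat(ts)
        if prev is None:
            lines.append(f"At {ts}: {desc}")
        else:
            gap = (t - prev).total_seconds() / 60.0
            lines.append(f"After {gap:g} minutes, at {ts}: {desc}")
        prev = t
    return "\n".join(lines)

prompt = encode_events([
    ("2024-01-01T09:00:00", "server restarted"),
    ("2024-01-01T09:45:00", "login failure spike"),
])
```

Because both timestamps and descriptions end up as ordinary text, the same encoding can feed forecasting, anomaly-detection, or imputation prompts without task-specific featurization.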
📝 Abstract
We present a novel prompt design for Large Language Models (LLMs) tailored to asynchronous time series. Unlike regular time series, which assume values at evenly spaced time points, asynchronous time series consist of timestamped events occurring at irregular intervals, each described in natural language. Our approach exploits these rich natural-language event descriptions, allowing LLMs to draw on their broad world knowledge to reason across different domains and tasks. This enables us to extend the scope of asynchronous time series analysis beyond forecasting to tasks such as anomaly detection and data imputation. We further introduce Stochastic Soft Prompting, a novel prompt-tuning mechanism that significantly improves model performance, outperforming existing fine-tuning methods such as QLoRA. Through extensive experiments on real-world datasets, we demonstrate that our approach achieves state-of-the-art performance across tasks and datasets.
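The core mechanic of soft prompting, with one plausible reading of the "stochastic" element, can be sketched as below. The random-prefix-length rule and all names here are assumptions for illustration; the paper's exact sampling scheme may differ.

```python
import random

def stochastic_soft_prompt(prompt_embeddings, token_embeddings, rng=random):
    """Prepend a randomly truncated prefix of a trainable soft prompt.

    `prompt_embeddings` are the only learned parameters; the LLM's own
    weights stay frozen. Sampling the prefix length at each training
    step (a hypothetical reading of the stochastic component) acts like
    dropout over prompt length: every prefix must remain useful on its
    own, regularizing the learned prompt.
    """
    p = rng.randint(1, len(prompt_embeddings))  # random prefix length
    return prompt_embeddings[:p] + token_embeddings

# Toy embeddings: 8 trainable prompt vectors, 3 embedded input tokens.
soft_prompt = [[0.1] * 4 for _ in range(8)]
tokens = [[0.5] * 4 for _ in range(3)]
batch = stochastic_soft_prompt(soft_prompt, tokens)
```

In a real setup the concatenated sequence would be fed to the frozen LLM and gradients would flow only into `soft_prompt`, which is why the method is far cheaper than full fine-tuning or QLoRA.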