Context-Aware Probabilistic Modeling with LLM for Multimodal Time Series Forecasting

📅 2025-05-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing time-series forecasting methods struggle to integrate exogenous textual information effectively and are incompatible with the probabilistic, token-generation paradigm of large language models (LLMs), limiting both contextual awareness and uncertainty quantification. To address this, the paper proposes CAPTime, a context-aware, text-guided multimodal probabilistic forecasting framework built on a frozen-LLM-driven mixture of distribution experts. CAPTime jointly leverages a learnable text–time-series alignment module and a pretrained time-series encoder to enable text-enhanced long-horizon distributional forecasting without LLM fine-tuning, preserving both semantic comprehension and probabilistic consistency. Evaluated across diverse domains, CAPTime significantly improves point and quantile forecasting accuracy and remains robust in low-data regimes, establishing a new paradigm for joint text–time-series modeling.

📝 Abstract
Time series forecasting is important for applications spanning energy markets, climate analysis, and traffic management. However, existing methods struggle to effectively integrate exogenous texts and align them with the probabilistic nature of large language models (LLMs). Current approaches either employ shallow text–time-series fusion via basic prompts or rely on deterministic numerical decoding that conflicts with LLMs' token-generation paradigm, which limits contextual awareness and distribution modeling. To address these limitations, we propose CAPTime, a context-aware probabilistic multimodal time series forecasting method that leverages text-informed abstraction and autoregressive LLM decoding. Our method first encodes temporal patterns using a pretrained time series encoder, then aligns them with textual contexts via learnable interactions to produce joint multimodal representations. By combining a mixture of distribution experts with frozen LLMs, we enable context-aware probabilistic forecasting while preserving LLMs' inherent distribution modeling capabilities. Experiments on diverse time series forecasting tasks demonstrate the superior accuracy and generalization of CAPTime, particularly in multimodal scenarios. Additional analysis highlights its robustness in data-scarce scenarios through hybrid probabilistic decoding.
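The abstract does not spell out how the "learnable interactions" between temporal patterns and textual contexts work. A common realization of such alignment is cross-attention, where time-series patch embeddings attend over text token embeddings; the sketch below assumes that interpretation, and all names and shapes are illustrative rather than taken from the paper (keys and values are collapsed into the raw token embeddings for brevity).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attend(ts_patches, text_tokens):
    """Single-head cross-attention sketch: each time-series patch
    embedding (query) attends over text token embeddings, which serve
    as both keys and values here (a real module would add learned
    projections). Returns one fused vector per patch."""
    d = len(ts_patches[0])
    fused = []
    for q in ts_patches:
        # Scaled dot-product attention weights over the text tokens.
        scores = softmax([
            sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
            for k in text_tokens
        ])
        # Weighted sum of token embeddings -> text-informed patch.
        fused.append([
            sum(w * tok[j] for w, tok in zip(scores, text_tokens))
            for j in range(d)
        ])
    return fused
```

The fused vectors would then be fed to the frozen LLM in place of ordinary token embeddings.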
Problem

Research questions and friction points this paper is trying to address.

Integrating exogenous texts with the probabilistic generation paradigm of LLMs for forecasting
Moving beyond shallow text–time-series fusion and deterministic decoding that conflict with LLM token generation
Enhancing contextual awareness and distribution modeling in multimodal time series prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages text-informed abstraction and autoregressive LLM decoding
Aligns temporal patterns with textual contexts via learnable interactions
Combines mixture of distribution experts with frozen LLMs
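The mixture-of-distribution-experts decoding could look roughly as follows: each expert maps a frozen-LLM hidden state to the parameters of a predictive distribution, and a gating network mixes them. This is a plain-Python sketch assuming Gaussian experts and linear gating; every function name and parameterization is hypothetical, not from the paper.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def mode_predict(h, experts, gate):
    """One step of a mixture-of-distribution-experts head (sketch).

    h       -- hidden-state vector from the frozen LLM (list of floats)
    experts -- per-expert (w_mu, w_logsigma) weight vectors
    gate    -- one gating weight vector per expert
    Returns mixture weights and per-expert (mu, sigma) parameters.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    weights = softmax([dot(h, g) for g in gate])
    # exp() keeps each expert's standard deviation positive.
    params = [(dot(h, wm), math.exp(dot(h, ws))) for wm, ws in experts]
    return weights, params

def mixture_nll(y, weights, params):
    """Negative log-likelihood of scalar y under the Gaussian mixture,
    the natural training loss for probabilistic forecasting."""
    pdf = lambda y, mu, s: (
        math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    )
    return -math.log(sum(w * pdf(y, mu, s)
                         for w, (mu, s) in zip(weights, params)))
```

Because only the gate and expert weights are trained, the LLM itself stays frozen, matching the paper's stated design.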