Teaching Time Series to See and Speak: Forecasting with Aligned Visual and Textual Perspectives

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional time-series forecasting relies solely on unimodal numerical inputs, limiting its capacity to capture high-order semantics and human-perceptual intuition (e.g., visual patterns). To address this, we propose TS-MultiView, a multimodal contrastive learning framework that simultaneously encodes raw time series into structured visual representations and LLM-generated textual descriptions, enabling cross-modal alignment within a shared semantic space. A learnable variate selection module is further introduced to improve interpretability and accuracy in multivariate forecasting. By avoiding discrete tokenization bottlenecks, TS-MultiView explicitly integrates visual intuition with linguistic semantics. Extensive experiments demonstrate state-of-the-art performance across 15 short-term and 6 long-term forecasting benchmarks, validating the efficacy of dual-view representation and contrastive alignment. The code is publicly available.

📝 Abstract
Time series forecasting traditionally relies on unimodal numerical inputs, which often struggle to capture high-level semantic patterns due to their dense and unstructured nature. While recent approaches have explored representing time series as text using large language models (LLMs), these methods remain limited by the discrete nature of token sequences and lack the perceptual intuition humans typically apply, such as interpreting visual patterns. In this paper, we propose a multimodal contrastive learning framework that transforms raw time series into structured visual and textual perspectives. Rather than using natural language or real-world images, we construct both modalities directly from numerical sequences. We then align these views in a shared semantic space via contrastive learning, enabling the model to capture richer and more complementary representations. Furthermore, we introduce a variate selection module that leverages the aligned representations to identify the most informative variables for multivariate forecasting. Extensive experiments on fifteen short-term and six long-term forecasting benchmarks demonstrate that our approach consistently outperforms strong unimodal and cross-modal baselines, highlighting the effectiveness of multimodal alignment in enhancing time series forecasting. Code is available at: https://github.com/Ironieser/TimesCLIP.
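The abstract's core recipe (render the numerical series as a structured visual view, describe it as text, then align the two views in a shared space with contrastive learning) can be sketched roughly as follows. This is a toy illustration, not the paper's architecture: the rasterizer, hash-based text embedding, and random projections stand in for the real visual and textual encoders, and the symmetric InfoNCE loss is the standard CLIP-style alignment objective the paper's framing suggests.

```python
import numpy as np

def to_visual(series, height=16):
    """Rasterize a 1-D series into a binary image (toy 'visual view')."""
    s = (series - series.min()) / (np.ptp(series) + 1e-8)  # normalize to [0, 1]
    rows = np.round(s * (height - 1)).astype(int)
    img = np.zeros((height, len(series)))
    img[rows, np.arange(len(series))] = 1.0  # mark one pixel per time step
    return img

def to_textual(series):
    """Summarize the series as text (toy stand-in for an LLM description)."""
    trend = "rising" if series[-1] > series[0] else "falling"
    return f"mean={series.mean():.2f} std={series.std():.2f} trend={trend}"

def embed_text(desc, dim=32, seed=0):
    """Hash-based bag-of-tokens embedding; stands in for a text encoder."""
    vocab = np.random.default_rng(seed).standard_normal((1000, dim))
    idx = [hash(tok) % 1000 for tok in desc.split()]
    return vocab[idx].mean(axis=0)

def embed_image(img, dim=32, seed=1):
    """Random linear projection; stands in for a visual encoder."""
    proj = np.random.default_rng(seed).standard_normal((img.size, dim))
    return img.ravel() @ proj / np.sqrt(img.size)

def info_nce(z_v, z_t, temperature=0.07):
    """Symmetric InfoNCE over a batch: matched (visual, textual) pairs
    sit on the diagonal of the similarity matrix and act as positives."""
    z_v = z_v / np.linalg.norm(z_v, axis=1, keepdims=True)
    z_t = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
    logits = z_v @ z_t.T / temperature
    labels = np.arange(len(z_v))
    def xent(l):
        l = l - l.max(axis=1, keepdims=True)         # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()     # diagonal = positives
    return 0.5 * (xent(logits) + xent(logits.T))
```

In the actual model the two encoders would be trained so that this loss pulls each series' visual and textual embeddings together while pushing apart mismatched pairs in the batch.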
Problem

Research questions and friction points this paper is trying to address.

Enhance time series forecasting with multimodal visual and textual representations
Overcome limitations of unimodal numerical inputs in capturing semantic patterns
Align visual and textual perspectives via contrastive learning for richer representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal contrastive learning for time series
Visual and textual alignment from numerical data
Variate selection using aligned multimodal representations
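The variate-selection idea in the last bullet could be realized as similarity-based gating over the aligned per-variate embeddings. A minimal sketch, assuming a cosine-similarity score against a selection query (hand-set here; learned end-to-end in the real module):

```python
import numpy as np

def select_variates(variate_emb, query, k=2):
    """Rank variates by cosine similarity to a query vector and keep the top-k.

    variate_emb: (n_vars, dim) aligned per-variate representations.
    query:       (dim,) selection query (learned in the real model).
    Returns (top-k variate indices, softmax weights over all variates).
    """
    v = variate_emb / np.linalg.norm(variate_emb, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = v @ q                       # cosine similarity per variate
    w = np.exp(scores - scores.max())    # numerically stable softmax
    w /= w.sum()
    top = np.argsort(w)[::-1][:k]
    return top, w
```

The softmax weights double as an interpretability signal: they show how much each variate contributes to the forecast, which matches the paper's stated goal of identifying the most informative variables.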