🤖 AI Summary
Existing multimodal large language models often rely on superficial pattern matching for time series reasoning, lacking the principled ability to connect temporal observations with downstream outcomes. To address this limitation, this work proposes RationaleTS, a rationale-based in-context learning framework that uniquely treats reasoning pathways as proactive guidance rather than post-hoc explanations. RationaleTS generates structured rationales conditioned on labels and introduces a hybrid retrieval mechanism that integrates temporal patterns with semantic context to incorporate relevant reasoning priors. Evaluated across time series tasks in three distinct domains, RationaleTS significantly outperforms current baselines, demonstrating its effectiveness and efficiency in enhancing the model’s capacity for principled, causal-style reasoning.
📝 Abstract
The underperformance of existing multimodal large language models on time series reasoning stems from the absence of rationale priors connecting temporal observations to their downstream outcomes, which leads models to rely on superficial pattern matching rather than principled reasoning. We therefore propose rationale-grounded in-context learning for time series reasoning, in which rationales serve as guiding reasoning units rather than post-hoc explanations, and develop the RationaleTS method. Specifically, we first induce label-conditioned rationales, composed of reasoning paths from observable evidence to potential outcomes. We then design a hybrid retrieval mechanism that balances temporal patterns and semantic contexts to retrieve correlated rationale priors for the final in-context inference on new samples. Extensive experiments on time series reasoning tasks across three domains demonstrate the effectiveness and efficiency of our proposed RationaleTS. We will release our code for reproduction.
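The hybrid retrieval step described above can be sketched as a weighted combination of a temporal-shape similarity and a semantic-embedding similarity. The sketch below is a minimal illustration, not the paper's implementation: the choice of cosine similarity over z-normalized series for the temporal side, the `alpha` mixing weight, and the `rationale` field names are all assumptions for demonstration.

```python
import numpy as np

def znorm(x):
    # Z-normalize a series so temporal retrieval compares shape, not scale/offset
    return (x - x.mean()) / (x.std() + 1e-8)

def cosine(a, b):
    # Standard cosine similarity with a small epsilon for numerical safety
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def hybrid_retrieve(query_ts, query_emb, pool, alpha=0.5, k=3):
    """Rank candidate rationales by a hypothetical hybrid score:
    alpha * temporal similarity + (1 - alpha) * semantic similarity.

    pool: list of dicts with keys 'ts' (np.ndarray series),
          'emb' (np.ndarray text embedding), 'rationale' (str).
    """
    scores = []
    for item in pool:
        t = cosine(znorm(query_ts), znorm(item["ts"]))   # temporal pattern match
        s = cosine(query_emb, item["emb"])               # semantic context match
        scores.append(alpha * t + (1 - alpha) * s)
    top = np.argsort(scores)[::-1][:k]                   # highest-scoring first
    return [pool[i]["rationale"] for i in top]
```

The retrieved rationales would then be placed in the prompt as in-context reasoning priors before the model infers on the new sample.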