🤖 AI Summary
This work addresses two limitations of existing large language models in time series question answering: they struggle to model critical temporal patterns such as trends and seasonality, and when trained on tasks of mixed difficulty they are dominated by easy samples, which undermines complex reasoning. To overcome these challenges, the authors propose PATRA, a framework that explicitly captures trend and seasonal components through a pattern-aware alignment mechanism and introduces a task-aware balanced reward strategy within a reinforcement learning paradigm to harmonize learning across tasks of varying difficulty, fostering the generation of coherent chains of thought. Experimental results demonstrate that PATRA significantly outperforms strong baselines across diverse time series question-answering benchmarks, exhibiting superior cross-modal understanding and deep reasoning abilities.
📝 Abstract
Time series reasoning demands both the perception of complex dynamics and logical depth. However, existing LLM-based approaches exhibit two limitations: they often treat time series merely as text or images, failing to capture patterns such as trends and seasonality that are needed to answer specific questions; and when trained on a mix of simple and complex tasks, the simpler objectives tend to dominate the learning process, hindering the development of deep reasoning capabilities. To address these limitations, we propose the Pattern-Aware Alignment and Balanced Reasoning model (PATRA), which introduces a pattern-aware mechanism that extracts trend and seasonality patterns from time series to achieve deep alignment. Furthermore, we design a task-aware balanced reward that harmonizes learning across tasks of varying difficulty, incentivizing the generation of coherent chains of thought. Extensive experiments show that PATRA outperforms strong baselines across diverse Time Series Question Answering (TSQA) tasks, demonstrating superior cross-modal understanding and reasoning capability.
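The pattern-aware mechanism described above hinges on separating a series into trend and seasonal components. As a minimal illustrative sketch (not PATRA's actual mechanism, whose details are not given here), a classical additive decomposition can expose these two patterns; the `period` argument and the centered moving-average smoother are assumptions:

```python
# Illustrative only: classical additive decomposition into the trend and
# seasonal components that pattern-aware alignment targets. Assumes an
# odd period and a centered moving-average trend estimate.

def decompose(series, period):
    """Split a series into (trend, seasonal, residual) components."""
    n = len(series)
    half = period // 2
    # Trend: centered moving average spanning one full period;
    # undefined (None) near the boundaries.
    trend = [
        sum(series[i - half:i + half + 1]) / (2 * half + 1)
        if half <= i < n - half else None
        for i in range(n)
    ]
    # Remove the trend where it is defined.
    detrended = [
        series[i] - trend[i] if trend[i] is not None else None
        for i in range(n)
    ]
    # Seasonal: average detrended value at each phase of the period.
    seasonal_means = []
    for phase in range(period):
        vals = [detrended[i] for i in range(phase, n, period)
                if detrended[i] is not None]
        seasonal_means.append(sum(vals) / len(vals) if vals else 0.0)
    seasonal = [seasonal_means[i % period] for i in range(n)]
    # Residual: whatever trend and seasonality do not explain.
    residual = [
        series[i] - trend[i] - seasonal[i] if trend[i] is not None else None
        for i in range(n)
    ]
    return trend, seasonal, residual
```

On a series built from a linear trend plus a zero-mean seasonal pattern, the recovered trend matches the linear component wherever the moving average is defined, and the residual vanishes.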