🤖 AI Summary
This work addresses the challenge that existing datasets often contain low-quality, unverifiable contextual information, which hinders multimodal models from outperforming unimodal approaches in context-aided forecasting. To overcome this limitation, the authors propose a semi-synthetic data augmentation strategy to construct CAF-7M, a large-scale, verifiable dataset of 7 million high-quality context–time series pairs, in which each textual context both describes the temporal dynamics and complements the historical numerical values. Experiments show that data quality, rather than model architecture, is the primary performance bottleneck, and that leveraging such high-fidelity contextual information significantly improves forecasting accuracy in real-world scenarios.
📝 Abstract
Context-aided forecasting (CAF) holds promise for integrating domain knowledge and forward-looking information, enabling AI systems to surpass traditional statistical methods. However, recent empirical studies reveal a puzzling gap: multimodal models often fail to outperform their unimodal counterparts. We hypothesize that this underperformance stems from poor context quality in existing datasets, since context quality is difficult to verify. To address this limitation, we introduce a semi-synthetic data augmentation method that generates contexts that are both descriptive of temporal dynamics and verifiably complementary to numerical histories. This approach enables dataset creation at massive scale, resulting in CAF-7M, a corpus of 7 million context-augmented time series windows, including a rigorously verified test set. We demonstrate that semi-synthetic pre-training transfers effectively to real-world evaluation, and show clear evidence of context utilization. Our results suggest that dataset quality, rather than architectural limitations, has been the primary bottleneck in context-aided forecasting.