Beyond Naïve Prompting: Strategies for Improved Zero-shot Context-aided Forecasting with LLMs

📅 2025-08-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited performance of large language models (LLMs) in zero-shot in-context prediction, this paper proposes four novel prompting strategies: ReDP (enhancing reasoning interpretability), CorDP (calibrating prediction confidence), IC-DP (embedding historical examples to improve generalization), and RouteDP (dynamically routing tasks by difficulty to suitably sized models). The methods integrate reasoning trace generation, context refinement, example injection, and resource-aware scheduling—requiring no fine-tuning or additional training. Evaluated on the CiK multi-task benchmark, all strategies significantly outperform naive zero-shot prompting, achieving an average accuracy gain of 12.7% across diverse open- and closed-source LLMs of varying scales. The core contribution is the first systematic prompting framework specifically designed for zero-shot in-context prediction, which simultaneously improves accuracy, interpretability, and computational efficiency while strictly preserving the zero-shot setting.

Technology Category

Application Category

📝 Abstract
Forecasting in real-world settings requires models to integrate not only historical data but also relevant contextual information, often available in textual form. While recent work has shown that large language models (LLMs) can be effective context-aided forecasters via naïve direct prompting, their full potential remains underexplored. We address this gap with 4 strategies, providing new insights into the zero-shot capabilities of LLMs in this setting. ReDP improves interpretability by eliciting explicit reasoning traces, allowing us to assess the model's reasoning over the context independently from its forecast accuracy. CorDP leverages LLMs solely to refine existing forecasts with context, enhancing their applicability in real-world forecasting pipelines. IC-DP proposes embedding historical examples of context-aided forecasting tasks in the prompt, substantially improving accuracy even for the largest models. Finally, RouteDP optimizes resource efficiency by using LLMs to estimate task difficulty, and routing the most challenging tasks to larger models. Evaluated on different kinds of context-aided forecasting tasks from the CiK benchmark, our strategies demonstrate distinct benefits over naïve prompting across LLMs of different sizes and families. These results open the door to further simple yet effective improvements in LLM-based context-aided forecasting.
Problem

Research questions and friction points this paper is trying to address.

Improving zero-shot context-aided forecasting with LLMs
Enhancing interpretability and accuracy in forecasting tasks
Optimizing resource efficiency for context-aided forecasting
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReDP enhances interpretability via explicit reasoning traces
CorDP refines forecasts using context for real-world pipelines
IC-DP embeds historical examples to boost accuracy
🔎 Similar Papers
No similar papers found.