🤖 AI Summary
This study investigates whether large language models (LLMs) can improve the accuracy of macroeconomic expert forecasts through ensemble learning. Using data from the European Central Bank's Survey of Professional Forecasters, we develop the first LLM-driven framework for expert forecast aggregation, explicitly modeling three behavioral features: forecast dispersion intensity, herding dynamics, and attention constraints. Methodologically, we integrate LLM-based semantic reasoning with classical ensemble learning and conduct multi-scenario empirical evaluations using standard macroeconomic forecasting metrics. Results show that the LLM-enhanced ensemble significantly outperforms conventional equal-weighted and regression-weighted combinations, reducing average prediction errors by 12.7% in high-dispersion and strong-herding regimes. Our primary contribution is the integration of LLMs into expert forecast synthesis, revealing their structural role in mitigating cognitive biases and enabling more adaptive, behaviorally informed weight allocation.
📝 Abstract
This study explores the potential of large language models (LLMs) to enhance expert forecasting through ensemble learning. Leveraging the European Central Bank's Survey of Professional Forecasters (SPF) dataset, we propose a comprehensive framework to evaluate LLM-driven ensemble predictions under varying conditions, including the intensity of expert disagreement, dynamics of herd behavior, and limitations in attention allocation.
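To make the baseline concrete, the sketch below contrasts the conventional equal-weighted combination mentioned above with a simple dispersion-aware weighting that down-weights experts far from the consensus. This is an illustrative toy, not the paper's LLM-driven method: the softmax-over-deviations rule and the `tau` parameter are assumptions introduced here for exposition.

```python
import numpy as np

def equal_weight(forecasts):
    # Equal-weighted combination: simple mean across experts.
    return forecasts.mean(axis=0)

def dispersion_aware(forecasts, tau=1.0):
    # Hypothetical dispersion-aware weighting (illustration only, not the
    # paper's method): experts whose forecasts deviate more from the
    # cross-expert consensus receive lower weight via a softmax over
    # negative mean squared deviations.
    consensus = forecasts.mean(axis=0)
    dev = ((forecasts - consensus) ** 2).mean(axis=1)
    w = np.exp(-dev / tau)
    w /= w.sum()
    return w @ forecasts

# Toy example: 4 experts forecasting inflation at 3 horizons;
# the last expert strongly disagrees with the rest.
f = np.array([
    [1.9, 2.0, 2.1],
    [2.0, 2.1, 2.2],
    [2.1, 2.0, 1.9],
    [3.5, 3.6, 3.4],  # outlier / strong disagreement
])
print(equal_weight(f))      # pulled up by the outlier
print(dispersion_aware(f))  # outlier down-weighted, closer to the majority
```

Under high dispersion, the weighted combination stays closer to the majority view, which is the kind of adaptive weight allocation the framework aims to learn.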