🤖 AI Summary
Traditional assessment of predictive skill relies on delayed, long-term outcome validation and therefore suffers from substantial temporal lag; cognitive assessments offer a rapid alternative, but the optimal test battery has remained undefined. This study proposes an adaptive cognitive assessment framework designed specifically for evaluating forecasting ability, combining Item Response Theory (IRT) with Bayesian algorithms to enable real-time, individualized item selection and adaptive test termination. By incorporating cognitive test battery theory, it constitutes the first adaptive system tailored to assessing forecasting potential. Validated on an independent dataset, the selected cognitive measures correlate strongly with actual forecasting performance (r > 0.7), reduce average administration time by 60%, and significantly improve identification of top-decile high-potential forecasters, thereby overcoming the limitations of conventional lagged evaluation paradigms.
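The summary describes IRT-based item selection with Bayesian updating but gives no implementation details. The following is a minimal sketch of how such a procedure is commonly built, assuming a two-parameter logistic (2PL) item model, a standard-normal ability prior, expected-a-posteriori (EAP) scoring on a grid, and maximum-Fisher-information item selection; the function names (`p_correct`, `item_information`, `eap_estimate`, `next_item`) are hypothetical, not the authors' code.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta,
    given discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information contributed by a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def eap_estimate(responses, items, grid=np.linspace(-4, 4, 161)):
    """EAP ability estimate and posterior SD under a standard-normal prior.
    `items` is a list of (a, b) tuples; `responses` holds 0/1 outcomes."""
    log_post = -0.5 * grid**2  # log N(0, 1) prior, up to a constant
    for (a, b), y in zip(items, responses):
        p = p_correct(grid, a, b)
        log_post += y * np.log(p) + (1 - y) * np.log(1 - p)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    theta_hat = np.sum(grid * post)
    sd = np.sqrt(np.sum((grid - theta_hat) ** 2 * post))
    return theta_hat, sd

def next_item(theta_hat, pool, used):
    """Select the unused item that is most informative at the
    current ability estimate (maximum-information selection)."""
    return max((i for i in range(len(pool)) if i not in used),
               key=lambda i: item_information(theta_hat, *pool[i]))
```

Maximum-information selection is one standard adaptive-testing rule; the paper's Bayesian procedure may instead weight information over the posterior or use a different criterion.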
📝 Abstract
Assessing forecasting proficiency is a time-intensive activity, often requiring us to wait months or years before we know whether the reported forecasts were good. In this study, we develop adaptive cognitive tests that predict forecasting proficiency without the need to wait for forecast outcomes. Our procedures determine which cognitive tests to administer to each individual, as well as how many cognitive tests to administer. Using item response models, we identify and tailor cognitive tests to assess forecasters of different skill levels, optimizing both accuracy and efficiency. We show how the procedures select highly informative cognitive tests from a larger battery, reducing the time taken to administer the tests. We use a second, independent dataset to show that the selected tests yield scores that are strongly related to forecasting proficiency. This approach enables real-time, adaptive testing, providing immediate insight into forecasting talent in practical contexts.
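The abstract's claim that the procedure decides both *which* tests and *how many* to administer corresponds to an adaptive loop with a precision-based stopping rule. Below is a hedged sketch of such a loop, reusing the 2PL helpers from the block above; `administer_adaptively`, `answer_fn`, and the `se_target` threshold are illustrative assumptions, not the authors' actual termination criterion.

```python
def administer_adaptively(pool, answer_fn, se_target=0.3, max_items=20):
    """Select items one at a time, re-estimate ability after each response,
    and stop once the posterior SD falls below se_target (or the item
    budget is exhausted). Returns the estimate, its SD, and items used."""
    used, responses, items = set(), [], []
    theta_hat, sd = 0.0, float("inf")
    while len(used) < max_items and sd > se_target:
        i = next_item(theta_hat, pool, used)
        used.add(i)
        items.append(pool[i])
        responses.append(answer_fn(pool[i]))  # 0/1 outcome from the examinee
        theta_hat, sd = eap_estimate(responses, items)
    return theta_hat, sd, len(used)

# Demo with a simulated examinee (true ability = 1.0) and a random item pool.
rng = np.random.default_rng(0)
pool = [(rng.uniform(0.8, 2.0), rng.uniform(-2, 2)) for _ in range(50)]
answer = lambda item: int(rng.random() < p_correct(1.0, *item))
theta_hat, sd, n_used = administer_adaptively(pool, answer)
print(f"estimate={theta_hat:.2f}, SD={sd:.2f}, items used={n_used}")
```

Because informative items shrink the posterior quickly, the loop typically halts well before exhausting the battery, which is the mechanism behind the reported reduction in administration time.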