🤖 AI Summary
Existing evaluation metrics for probabilistic forecasting primarily emphasize predictive accuracy while neglecting the forecasts' practical utility in downstream decision-making tasks, creating a misalignment between how models are evaluated and how they are used. To address this, we propose a data-driven evaluation alignment framework that formulates the learning of a surrogate evaluation function as an end-to-end optimization problem. Leveraging the theory of proper scoring rules, our approach employs a neural-network-parameterized weighted scoring rule to automatically learn an evaluation function aligned with downstream objectives, without assuming any prior cost structure. This work is the first to formalize evaluation alignment as a learnable problem, combining theoretical rigor with engineering scalability. Experiments on synthetic and real-world regression tasks demonstrate its effectiveness: it significantly reduces the gap between evaluation scores and downstream decision utility, enabling rapid, task-adaptive model selection and hyperparameter tuning.
📝 Abstract
Every prediction is ultimately used in a downstream task. Consequently, evaluating prediction quality is more meaningful when considered in the context of its downstream use. Metrics based solely on predictive performance often diverge from measures of real-world downstream impact. Existing approaches incorporate the downstream view either by relying on multiple task-specific metrics, which can be burdensome to analyze, or by formulating cost-sensitive evaluations that require an explicit cost structure, typically assumed to be known a priori. We frame this mismatch as an evaluation alignment problem and propose a data-driven method to learn a proxy evaluation function aligned with the downstream evaluation. Building on the theory of proper scoring rules, we explore transformations of scoring rules that provably preserve propriety. Our approach leverages weighted scoring rules parametrized by a neural network, where the weighting is learned to align with performance on the downstream task. This enables fast and scalable evaluation cycles across tasks where the weighting is complex or unknown a priori. We showcase our framework through synthetic and real-data experiments on regression tasks, demonstrating its potential to bridge the gap between predictive evaluation and downstream utility in modular prediction systems.
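To make the weighted-scoring-rule idea concrete, here is a minimal sketch of one propriety-preserving weighting: the threshold-weighted CRPS (Gneiting & Ranjan, 2011), evaluated for an empirical ensemble forecast on a fixed grid. The abstract does not specify which scoring rule or weighting the paper uses, so the choice of CRPS, the grid-based integration, and the hand-chosen weight function below are all illustrative assumptions; in the proposed framework the weight would instead be the output of a neural network trained end-to-end against downstream performance.

```python
import numpy as np

def threshold_weighted_crps(ensemble, y, weight_fn, z_grid):
    """Threshold-weighted CRPS on an equally spaced grid:
    twCRPS(F, y) ~= sum_z w(z) * (F(z) - 1{y <= z})^2 * dz,
    where F is the empirical CDF of the ensemble forecast.
    Any nonnegative weight w preserves propriety of the score."""
    dz = z_grid[1] - z_grid[0]                                # uniform grid spacing
    F = (ensemble[None, :] <= z_grid[:, None]).mean(axis=1)   # empirical CDF F(z)
    indicator = (y <= z_grid).astype(float)                   # step function 1{y <= z}
    return float(np.sum(weight_fn(z_grid) * (F - indicator) ** 2) * dz)
```

With `weight_fn` identically one, this reduces to the ordinary CRPS; replacing it with a learned nonnegative network output is what lets the score emphasize the regions of outcome space that matter for the downstream task.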