🤖 AI Summary
This paper investigates property elicitation under the imprecise probability (IP) framework, focusing on $\Gamma$-maximin risk minimization, the robust objective underlying multi-distribution learning. Addressing the limitation of standard single-distribution risk minimization in capturing IP uncertainty, the work establishes necessary conditions for property elicitation in the IP setting. It introduces the notion of a *Bayes pair* to give the elicited property a semantic interpretation and reveals its intrinsic connection to the maximum Bayes risk distribution. By integrating decision theory, convex analysis, and Bayesian inference, the paper characterizes theoretical boundaries of IP property elicitation, thereby extending classical elicitation theory to multi-distribution robust learning. The results provide both an interpretability foundation and rigorous theoretical support for imprecise probability modeling.
📝 Abstract
Property elicitation studies which attributes of a probability distribution can be determined by minimising a risk. We investigate a generalisation of property elicitation to imprecise probabilities (IP). This investigation is motivated by multi-distribution learning, which takes the classical machine learning paradigm of minimising a single risk over a (precise) probability and replaces it with $\Gamma$-maximin risk minimisation over an IP. We provide necessary conditions for elicitability of an IP-property. Furthermore, we explain what an elicitable IP-property actually elicits through Bayes pairs: the elicited IP-property is the corresponding standard property of the maximum Bayes risk distribution.
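To make the objects concrete, here is a minimal sketch of the quantities involved, assuming a loss $\ell$, a report space $\mathcal{A}$, an outcome $Y$, and a credal set $\Gamma$ of distributions; the notation is illustrative and may differ from the paper's. A (precise) property $T$ is elicited by $\ell$ when it is recovered by risk minimisation, and the $\Gamma$-maximin rule instead minimises the worst-case risk over $\Gamma$:

$$
T(P) \;=\; \operatorname*{arg\,min}_{a \in \mathcal{A}} \; \mathbb{E}_{Y \sim P}\big[\ell(a, Y)\big],
\qquad
a^{*}_{\Gamma} \;\in\; \operatorname*{arg\,min}_{a \in \mathcal{A}} \; \sup_{P \in \Gamma} \; \mathbb{E}_{Y \sim P}\big[\ell(a, Y)\big].
$$

Read this way, the Bayes-pair statement in the abstract says, informally, that when the IP-property is elicitable, $a^{*}_{\Gamma} = T(P^{*})$ for a maximum Bayes risk distribution $P^{*} \in \operatorname*{arg\,max}_{P \in \Gamma} \min_{a \in \mathcal{A}} \mathbb{E}_{Y \sim P}[\ell(a, Y)]$; this is an informal restatement, not the paper's exact definition.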