Auditability and the Landscape of Distance to Multicalibration

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing generalizations of calibration metrics to the multigroup setting fail to be simultaneously auditable and interpretable as a measure of how much a model must be modified. Method: Building on the distance-to-calibration (dCE) framework of Błasiok et al. (2023), the paper proposes two equivalent multicalibration metrics, a continuized variant of distance to multicalibration (dMC) and a distance to intersection multicalibration, which quantify how far a predictor is from the set of perfectly multicalibrated predictors while remaining auditable in an information-theoretic sense. Contribution/Results: The analysis shows that the two natural generalizations of dCE, worst group dCE (wdMC) and dMC, each fail one of these two essential properties, and characterizes the loss landscape of distance to multicalibration and the geometry of the set of perfectly multicalibrated predictors. The proposed metrics satisfy both properties, providing a rigorous foundation for multigroup fairness auditing and robust uncertainty modeling.

📝 Abstract
Calibration is a critical property for establishing the trustworthiness of predictors that provide uncertainty estimates. Multicalibration is a strengthening of calibration which requires that predictors be calibrated on a potentially overlapping collection of subsets of the domain. As multicalibration grows in popularity with practitioners, an essential question is: how do we measure how multicalibrated a predictor is? Błasiok et al. (2023) considered this question for standard calibration by introducing the distance to calibration framework (dCE) to understand how calibration metrics relate to each other and the ground truth. Building on the dCE framework, we consider the auditability of the distance to multicalibration of a predictor $f$. We begin by considering two natural generalizations of dCE to multiple subgroups: worst group dCE (wdMC), and distance to multicalibration (dMC). We argue that there are two essential properties of any multicalibration error metric: 1) the metric should capture how much $f$ would need to be modified in order to be perfectly multicalibrated; and 2) the metric should be auditable in an information theoretic sense. We show that wdMC and dMC each fail to satisfy one of these two properties, and that similar barriers arise when considering the auditability of general distance to multigroup fairness notions. We then propose two (equivalent) multicalibration metrics which do satisfy these requirements: 1) a continuized variant of dMC; and 2) a distance to intersection multicalibration, which leans on intersectional fairness desiderata. Along the way, we shed light on the loss-landscape of distance to multicalibration and the geometry of the set of perfectly multicalibrated predictors. Our findings may have implications for the development of stronger multicalibration algorithms as well as multigroup auditing more generally.
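To make the worst-group idea in the abstract concrete, here is a minimal sketch of a binned, ECE-style worst-group calibration error over overlapping subgroups. This is only an illustration of the flavor of wdMC, not the paper's formal distance-based definition; the function names, binning scheme, and subgroup representation (index arrays) are assumptions for this example. It also illustrates the key phenomenon motivating multicalibration: a predictor can be perfectly calibrated overall while badly miscalibrated on individual subgroups.

```python
import numpy as np

def binned_ece(preds, labels, n_bins=10):
    """Binned expected calibration error: for each prediction bin,
    the gap between mean predicted probability and empirical label
    frequency, weighted by the fraction of points in the bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        # Last bin is closed on the right so preds == 1.0 are included.
        mask = (preds >= lo) & ((preds < hi) if hi < 1.0 else (preds <= hi))
        if mask.sum() == 0:
            continue
        gap = abs(preds[mask].mean() - labels[mask].mean())
        ece += (mask.sum() / len(preds)) * gap
    return ece

def worst_group_ece(preds, labels, groups, n_bins=10):
    """Worst-group calibration error over a collection of (possibly
    overlapping) subgroups, each given as an index array -- an
    ECE-style analogue of the worst group dCE (wdMC) idea."""
    return max(binned_ece(preds[g], labels[g], n_bins) for g in groups)
```

For instance, a constant predictor f = 0.5 on a population where one subgroup has all-positive labels and another all-negative labels has zero overall binned error but worst-group error 0.5.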
Problem

Research questions and friction points this paper is trying to address.

Measuring how multicalibrated a predictor is for uncertainty estimates
Developing auditable metrics for distance to multicalibration of predictors
Addressing limitations of existing multicalibration error metric generalizations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes continuized dMC variant for multicalibration auditability
Introduces distance to intersection multicalibration metric
Analyzes loss landscape and geometry of multicalibrated predictors
👥 Authors
Nathan Derhake, University of Southern California
Siddartha Devic, PhD Student, University of Southern California (machine learning theory, algorithmic fairness)
Dutch Hansen, University of Southern California
Kuan Liu, University of Southern California
Vatsal Sharan, Stanford University