On the Bayes Inconsistency of Disagreement Discrepancy Surrogates

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Deep neural networks suffer degraded reliability under distribution shift in real-world settings; disagreement discrepancy has emerged as a key robustness measure, but it is defined via the non-differentiable 0–1 loss, necessitating surrogate losses. Method: We establish, for the first time, that existing surrogates are Bayes-inconsistent: maximizing them does not guarantee maximization of the true disagreement discrepancy. Guided by new upper and lower bounds on the surrogate optimality gap, we propose a novel Bayes-consistent disagreement loss that, paired with cross-entropy, yields a differentiable, end-to-end optimizable objective. Contribution/Results: Experiments across multiple benchmark datasets and adversarial distribution-shift scenarios demonstrate that our method significantly improves both the accuracy and robustness of disagreement discrepancy estimation, providing a more principled theoretical foundation and practical tool for out-of-distribution generalization.

📝 Abstract
Deep neural networks often fail when deployed in real-world contexts due to distribution shift, a critical barrier to building safe and reliable systems. An emerging approach to address this problem relies on *disagreement discrepancy* -- a measure of how the disagreement between two models changes under a shifting distribution. The process of maximizing this measure has seen applications in bounding error under shifts, testing for harmful shifts, and training more robust models. However, this optimization involves the non-differentiable zero-one loss, necessitating the use of practical surrogate losses. We prove that existing surrogates for disagreement discrepancy are not Bayes consistent, revealing a fundamental flaw: maximizing these surrogates can fail to maximize the true disagreement discrepancy. To address this, we introduce new theoretical results providing both upper and lower bounds on the optimality gap for such surrogates. Guided by this theory, we propose a novel disagreement loss that, when paired with cross-entropy, yields a provably consistent surrogate for disagreement discrepancy. Empirical evaluations across diverse benchmarks demonstrate that our method provides more accurate and robust estimates of disagreement discrepancy than existing approaches, particularly under challenging adversarial conditions.
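As a concrete illustration of the quantity being maximized, the empirical disagreement discrepancy can be computed directly with the 0–1 loss. The sketch below (plain NumPy) follows the abstract's description of the measure as the change in disagreement between source and shifted distributions; the function names are illustrative, not from the paper:

```python
import numpy as np

def zero_one_disagreement(preds_a, preds_b):
    """Empirical 0-1 disagreement rate between two models' hard predictions."""
    preds_a, preds_b = np.asarray(preds_a), np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

def disagreement_discrepancy(src_a, src_b, tgt_a, tgt_b):
    """Disagreement on the target (shifted) distribution minus
    disagreement on the source distribution."""
    return zero_one_disagreement(tgt_a, tgt_b) - zero_one_disagreement(src_a, src_b)

# Example: two classifiers that agree on source data but diverge after a shift.
src_a = [0, 1, 1, 0]; src_b = [0, 1, 1, 0]   # full agreement on source
tgt_a = [0, 1, 1, 0]; tgt_b = [1, 1, 0, 0]   # disagree on 2 of 4 target points
print(disagreement_discrepancy(src_a, src_b, tgt_a, tgt_b))  # → 0.5
```

The `!=` indicator inside `zero_one_disagreement` is exactly the zero-one loss the abstract identifies as non-differentiable, which is what forces the move to surrogate objectives.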
Problem

Research questions and friction points this paper is trying to address.

Proves existing surrogates for disagreement discrepancy lack Bayes consistency
Introduces theoretical bounds on optimality gap for such surrogates
Proposes a novel provably consistent surrogate loss for disagreement discrepancy
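The need for surrogates listed above is easy to see: the 0–1 disagreement has zero gradient almost everywhere. A common style of relaxation replaces the hard indicator with a probability-based score. The sketch below shows one such generic relaxation for illustration only; it is not the paper's provably consistent surrogate, which is not reproduced on this page:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def soft_disagreement(logits_a, hard_preds_b):
    """Differentiable relaxation of the 0-1 disagreement rate:
    one minus the probability model A assigns to model B's predicted
    class, averaged over the batch. Unlike the 0-1 loss, this admits
    gradients with respect to logits_a."""
    probs_a = softmax(np.asarray(logits_a, dtype=float))
    idx = np.asarray(hard_preds_b)
    return float(np.mean(1.0 - probs_a[np.arange(len(idx)), idx]))

# If A confidently matches B everywhere, the relaxation is near 0;
# it grows toward 1 as A shifts probability mass away from B's predictions.
logits_a = np.array([[5.0, 0.0], [0.0, 5.0]])
print(soft_disagreement(logits_a, [0, 1]))  # small value, ≈ 0.0067
```

The paper's inconsistency result concerns precisely this kind of relaxation: maximizing a surrogate like the one above need not maximize the true 0–1 disagreement discrepancy, motivating the consistent loss proposed in this work.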
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proving inconsistency of existing surrogate losses
Introducing novel provably consistent disagreement loss
Validating method across diverse adversarial benchmarks
Neil G. Marchant
School of Computing & Information Systems, University of Melbourne, Australia
Andrew C. Cullen
School of Computing & Information Systems, University of Melbourne, Australia
Feng Liu
School of Computing & Information Systems, University of Melbourne, Australia
Sarah M. Erfani
Associate Professor, Computing and Information Systems, University of Melbourne
Machine Learning · AI Safety · Cyber Security