🤖 AI Summary
Learning-to-Defer (L2D) in hybrid decision systems is vulnerable to adversarial perturbations, which can simultaneously corrupt both model predictions and deferral decisions; existing approaches are restricted to two-stage frameworks and lack formal robustness guarantees. Method: We propose the first end-to-end, single-stage adversarially robust L2D framework. It jointly optimizes the predictor and the deferral policy, formalizes an adversarial attack model targeting both components, and introduces a cost-sensitive adversarial surrogate loss. Theoretical analysis establishes robustness guarantees, including $\mathcal{H}$-consistency, $(\mathcal{R}, \mathcal{F})$-consistency, and Bayes consistency. Contribution/Results: Extensive experiments on classification and regression tasks demonstrate significant robustness gains against both untargeted and targeted adversarial attacks while preserving clean predictive performance.
📝 Abstract
Learning-to-Defer (L2D) enables hybrid decision-making by routing inputs either to a predictor or to external experts. While promising, L2D is highly vulnerable to adversarial perturbations, which can not only flip predictions but also manipulate deferral decisions. Prior robustness analyses focus solely on two-stage settings, leaving open the end-to-end (one-stage) case where the predictor and the allocation policy are trained jointly. We introduce the first framework for adversarial robustness in one-stage L2D, covering both classification and regression. Our approach formalizes the attack model, proposes cost-sensitive adversarial surrogate losses, and establishes theoretical guarantees including $\mathcal{H}$, $(\mathcal{R}, \mathcal{F})$, and Bayes consistency. Experiments on benchmark datasets confirm that our methods improve robustness against untargeted and targeted attacks while preserving clean performance.
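To make the one-stage setup concrete, here is a minimal sketch of a cost-sensitive L2D surrogate loss in the common "extra defer class" parameterization: the model outputs $K$ class scores plus one deferral score, and the loss rewards deferring when the expert would be correct, discounted by a deferral cost. This is an illustrative simplification, not the paper's actual surrogate; the function name, the `defer_cost` parameter, and the specific weighting are assumptions for exposition.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of logits."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def l2d_surrogate_loss(scores, y, expert_correct, defer_cost=0.3):
    """Cost-sensitive cross-entropy surrogate for one-stage L2D (illustrative).

    scores: K class logits followed by one extra logit for the defer option.
    y: index of the true class (0 <= y < K).
    expert_correct: whether the expert would predict correctly on this input.
    defer_cost: cost of consulting the expert, in [0, 1]; higher cost
        weakens the incentive to defer.
    """
    p = softmax(scores)
    defer_idx = len(scores) - 1
    # Encourage the predictor to be correct on its own.
    loss = -math.log(p[y])
    if expert_correct:
        # Reward routing to the expert when the expert would be right,
        # discounted by the cost of deferral.
        loss += -(1.0 - defer_cost) * math.log(p[defer_idx])
    return loss
```

In an adversarially robust variant, this loss would be evaluated on a perturbed input $x + \delta$ chosen to attack both the class scores and the defer score, which is the joint threat model the abstract describes.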