AI Summary
Two-stage Learning-to-Defer (L2D) systems exhibit critical adversarial robustness deficiencies in human-AI collaboration settings, leaving them vulnerable to misleading query attacks and to overload of human decision-makers. Method: We propose two novel adversarial attack strategies and introduce SARD, a unified algorithm grounded in a convex optimization framework based on the cross-entropy family of surrogate losses. SARD integrates Bayesian risk minimization with $(\mathcal{R},\mathcal{G})$-consistency theory to jointly guarantee Bayes-optimal decisions and consistent human-AI task allocation. Contribution/Results: Evaluated across classification, regression, and multi-task benchmarks, SARD significantly enhances robustness under adversarial perturbations, ensuring both the reliability and the optimality of task delegation in human-AI collaborative decision-making.
Abstract
Learning-to-Defer (L2D) facilitates optimal task allocation between AI systems and human decision-makers. Despite its potential, we show that current two-stage L2D frameworks are highly vulnerable to adversarial attacks, which can misdirect queries or overwhelm decision agents, significantly degrading system performance. This paper conducts the first comprehensive analysis of adversarial robustness in two-stage L2D frameworks. We introduce two novel attack strategies -- untargeted and targeted -- that exploit inherent structural vulnerabilities in these systems. To mitigate these threats, we propose SARD, a robust, convex deferral algorithm rooted in Bayes and $(\mathcal{R},\mathcal{G})$-consistency. Our approach guarantees optimal task allocation under adversarial perturbations for all surrogates in the cross-entropy family. Extensive experiments on classification, regression, and multi-task benchmarks validate the robustness of SARD.
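For readers unfamiliar with the setting, the following is a minimal sketch of the standard two-stage L2D inference rule that such systems build on: a separately trained rejector scores each agent (the AI model and each human expert), and the query is routed to the highest-scoring agent. The function names, the scoring convention (index 0 = AI model), and the toy values are illustrative assumptions, not the paper's SARD algorithm.

```python
import numpy as np

def l2d_decision(clf_probs, rejector_scores, expert_preds):
    """Route one query in a two-stage L2D pipeline.

    clf_probs:       AI classifier's class probabilities for the query.
    rejector_scores: one score per agent; index 0 is the AI model,
                     indices 1..J are human experts.
    expert_preds:    the prediction each expert would give on this query.
    """
    agent = int(np.argmax(rejector_scores))   # pick the highest-scoring agent
    if agent == 0:
        return int(np.argmax(clf_probs))      # AI model predicts itself
    return expert_preds[agent - 1]            # defer to the chosen expert

# Toy query: the rejector trusts expert 1 more than the AI model,
# so the query is deferred and expert 1's label is returned.
pred = l2d_decision(
    clf_probs=np.array([0.7, 0.3]),
    rejector_scores=np.array([0.2, 0.9, 0.4]),  # [AI, expert 1, expert 2]
    expert_preds=[1, 0],
)
```

Because the routing is a hard argmax over learned scores, a small adversarial perturbation that flips the ordering of `rejector_scores` can misdirect the query or flood a single human expert, which is the structural vulnerability the attacks above exploit.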