📝 Abstract
Traditional approaches to learning fair machine learning models often require rebuilding models from scratch, generally without accounting for potentially existing previous models. In contexts where models must be retrained frequently, this can lead to inconsistent model updates, as well as redundant and costly validation testing. To address this limitation, we introduce the notion of controlled model debiasing, a novel supervised learning task relying on two desiderata: the differences between the new fair model and the existing one should be (i) interpretable and (ii) minimal. After providing theoretical guarantees for this new problem, we introduce a novel algorithm for algorithmic fairness, COMMOD, which is model-agnostic and does not require the sensitive attribute at test time. In addition, our algorithm is explicitly designed to enforce minimal and interpretable changes between biased and debiased predictions, a property that, while highly desirable in high-stakes applications, is rarely prioritized as an explicit objective in the fairness literature. Our approach combines a concept-based architecture with adversarial learning, and we demonstrate through empirical results that it achieves performance comparable to state-of-the-art debiasing methods while making minimal and interpretable prediction changes.
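To make the controlled-debiasing objective concrete, the following is a deliberately simplified toy sketch, not the paper's COMMOD algorithm (which uses a concept-based architecture with adversarial learning). It illustrates the two desiderata on synthetic data: debias a fixed, already-deployed model's scores while keeping the edit minimal (only the predictions that must flip do flip) and interpretable (the edit is a single per-group offset). All names, the demographic-parity criterion, and the quantile-shift heuristic here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): a fixed, pre-existing "biased" model
# assigns systematically lower scores to group 1.
n = 1000
group = rng.integers(0, 2, size=n)                # sensitive attribute (training-time only)
base_scores = rng.normal(0.6 - 0.3 * group, 0.1)  # the existing model's scores

def dp_gap(scores, group):
    """Demographic-parity gap: |positive-rate difference| between groups at threshold 0.5."""
    pos = scores > 0.5
    return abs(pos[group == 0].mean() - pos[group == 1].mean())

# Controlled debiasing, toy version: apply the *smallest* uniform shift to the
# disadvantaged group's scores so both groups reach the same positive rate.
target_rate = (base_scores[group == 0] > 0.5).mean()
cutoff = np.quantile(base_scores[group == 1], 1 - target_rate)
shift = max(0.5 - cutoff, 0.0)

new_scores = base_scores.copy()
new_scores[group == 1] += shift

# The edit is interpretable (one per-group offset) and minimal: only group-1
# examples whose shifted scores cross the threshold change prediction.
changed = (new_scores > 0.5) != (base_scores > 0.5)
```

In this sketch the fairness gap closes almost entirely, yet every flipped prediction is traceable to the single offset `shift`; the actual paper replaces this hand-crafted rule with a learned, concept-driven edit model trained adversarially and without the sensitive attribute at test time.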