Controlled Model Debiasing through Minimal and Interpretable Updates

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional fair learning methods require retraining models from scratch, rendering them incompatible with pre-trained models and leading to inconsistent updates and high validation costs in frequent-retraining scenarios. To address this, we propose a novel paradigm—*controlled model debiasing*—which explicitly optimizes for *minimal and interpretable prediction changes* while strictly bounding the discrepancy between old and new models to preserve predictive consistency. Our approach integrates a concept-driven architecture with adversarial learning, incorporating interpretability constraints and change sparsity regularization. It is model-agnostic, operates without access to sensitive attributes at inference time, and provides theoretical guarantees on fairness and stability. Evaluated on multiple benchmarks, our method achieves state-of-the-art fairness performance while reducing average prediction change by 37% compared to baselines—significantly enhancing trustworthiness and deployment robustness in high-stakes applications.

📝 Abstract
Traditional approaches to learning fair machine learning models often require rebuilding models from scratch, generally without accounting for potentially existing previous models. In a context where models need to be retrained frequently, this can lead to inconsistent model updates, as well as redundant and costly validation testing. To address this limitation, we introduce the notion of controlled model debiasing, a novel supervised learning task relying on two desiderata: the differences between the new fair model and the existing one should be (i) interpretable and (ii) minimal. After providing theoretical guarantees for this new problem, we introduce a novel algorithm for algorithmic fairness, COMMOD, that is both model-agnostic and does not require the sensitive attribute at test time. In addition, our algorithm is explicitly designed to enforce minimal and interpretable changes between biased and debiased predictions, a property that, while highly desirable in high-stakes applications, is rarely prioritized as an explicit objective in the fairness literature. Our approach combines a concept-based architecture with adversarial learning, and we demonstrate through empirical results that it achieves performance comparable to state-of-the-art debiasing methods while making minimal and interpretable prediction changes.
Problem

Research questions and friction points this paper is trying to address.

Addresses inconsistent updates in frequent model retraining
Ensures minimal and interpretable changes in debiased models
Eliminates need for sensitive attributes during testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controlled model debiasing with minimal updates
Model-agnostic algorithm without sensitive attributes
Combines concept-based architecture and adversarial learning
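The combination of debiasing with a change-sparsity constraint can be sketched as a composite objective: a task loss on the new model's predictions, an adversarial fairness term, and a penalty on deviations from the old model's predictions. The sketch below is a generic illustration of that idea, not the paper's actual COMMOD objective; the function name, the adversary proxy, and the weights `lam_fair` and `lam_sparse` are hypothetical.

```python
import numpy as np

def controlled_debias_loss(p_new, p_old, y, s_pred_adv,
                           lam_fair=1.0, lam_sparse=0.5):
    """Composite loss for controlled debiasing (illustrative sketch).

    p_new      -- new (debiased) model probabilities
    p_old      -- existing (biased) model probabilities
    y          -- binary labels
    s_pred_adv -- an adversary's predicted probability of the
                  sensitive attribute from p_new
    """
    eps = 1e-9
    # Task term: binary cross-entropy of the new predictions.
    task = -np.mean(y * np.log(p_new + eps)
                    + (1 - y) * np.log(1 - p_new + eps))
    # Fairness term: penalize adversary confidence; it is zero when the
    # adversary is at chance (0.5), i.e. the sensitive attribute is not
    # recoverable from the new predictions.
    fair = np.mean(np.abs(s_pred_adv - 0.5))
    # Change-sparsity term: L1 penalty keeping the new predictions
    # close to the old model's, so updates stay minimal.
    sparse = np.mean(np.abs(p_new - p_old))
    return task + lam_fair * fair + lam_sparse * sparse
```

In a real adversarial setup the fairness term would come from a trained adversary network rather than a fixed proxy, but the trade-off structure (accuracy vs. fairness vs. minimal change) is the same.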
Federico Di Gennaro
EPFL
Statistical Learning, Online Learning, Trustworthy ML
Thibault Laugel
Researcher @AXA, Associate Researcher @Sorbonne Université/LIP6
Machine Learning, XAI, AI Fairness, Trustworthy ML
Vincent Grari
AXA, Paris, France
Marcin Detyniecki
AXA, Paris, France; TRAIL, LIP6, Sorbonne Université, Paris, France; Polish Academy of Science, IBS PAN, Warsaw, Poland