PAC-Bayesian Generalization Guarantees for Fairness on Stochastic and Deterministic Classifiers

📅 2026-02-12
🤖 AI Summary
This work addresses the theoretical challenge of simultaneously guaranteeing predictive risk and fairness constraints within traditional PAC generalization bounds. We propose the first PAC-Bayesian framework for fairness-aware generalization analysis, applicable to both randomized and deterministic classifiers, and compatible with a broad class of fairness metrics expressible as risk differences. By jointly optimizing the generalization bounds on both prediction error and fairness violation, our approach enables self-bounded learning. Empirical evaluations across three canonical fairness metrics demonstrate the effectiveness and tightness of the derived bounds. Notably, this study extends PAC-Bayes theory to the analysis of fairness in deterministic classifiers, thereby establishing the first theoretical foundation for this setting and filling a critical gap in the existing literature.
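As background, a minimal sketch of the classical McAllester-style PAC-Bayes bound on the Gibbs risk of a stochastic classifier, the kind of result this framework extends from prediction risk to fairness violation; this is the standard bound, not the paper's specific fairness bound:

```latex
% Classical PAC-Bayes bound (McAllester-style) for the Gibbs risk;
% shown only as background, not the paper's fairness bound.
% With probability at least 1-\delta over an i.i.d. sample S of size m,
% simultaneously for all posteriors Q over the hypothesis class:
\[
  R(G_Q) \;\le\; \widehat{R}_S(G_Q)
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}},
\]
% where P is a data-independent prior, R(G_Q) is the true Gibbs risk,
% and \widehat{R}_S(G_Q) is its empirical counterpart on S.
```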

📝 Abstract
Classical PAC generalization bounds on the prediction risk of a classifier are insufficient to provide theoretical guarantees on fairness when the goal is to learn models balancing predictive risk and fairness constraints. We propose a PAC-Bayesian framework for deriving generalization bounds on fairness, covering both stochastic and deterministic classifiers. For stochastic classifiers, we derive a fairness bound using standard PAC-Bayes techniques. For deterministic classifiers, where the usual PAC-Bayes arguments do not apply directly, we leverage a recent advance in PAC-Bayes theory to extend the fairness bound beyond the stochastic setting. Our framework has two advantages: (i) it applies to a broad class of fairness measures that can be expressed as a risk discrepancy, and (ii) it leads to a self-bounding algorithm in which the learning procedure directly optimizes a trade-off between the generalization bounds on the prediction risk and on the fairness violation. We empirically evaluate our framework with three classical fairness measures, demonstrating not only its usefulness but also the tightness of our bounds.
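To make the "fairness as a risk discrepancy" idea concrete, here is a hypothetical illustration of one classical fairness measure, the demographic-parity gap, written as a difference between two group-conditional rates. The function name, toy data, and choice of metric are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates across two groups.

    This is one fairness measure expressible as a risk discrepancy:
    |P(h(x)=1 | A=0) - P(h(x)=1 | A=1)|.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group A=0
    rate_b = y_pred[group == 1].mean()  # positive rate in group A=1
    return abs(rate_a - rate_b)

# Toy data: binary predictions and a binary sensitive attribute.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(y_pred, group)  # 0.75 - 0.25 = 0.5
```

Other measures covered by the framework (e.g. equalized-odds-style gaps) follow the same pattern, conditioning the two rates on the true label as well.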
Problem

Research questions and friction points this paper is trying to address.

PAC-Bayesian
fairness
generalization bounds
stochastic classifiers
deterministic classifiers
Innovation

Methods, ideas, or system contributions that make the work stand out.

PAC-Bayesian
fairness generalization
stochastic classifiers
deterministic classifiers
risk discrepancy
Julien Bastian
Université Jean Monnet Saint-Étienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516, F-42023, Saint-Étienne, France
Benjamin Leblanc
Département d'informatique et de génie logiciel, Université Laval, Québec, Canada
Pascal Germain
Associate Professor, Université Laval
Machine Learning
Amaury Habrard
Professor of Computer Science, Université Jean Monnet of Saint-Étienne (France)
Machine Learning
Christine Largeron
Professor of Computer Science, Jean Monnet University
Data Mining, Information Retrieval, Social Networks, Machine Learning
Guillaume Metzler
Université Lumière Lyon 2, Université Claude Bernard Lyon 1, ERIC, 69007, Lyon, France
Emilie Morvant
Associate Professor (HDR), University of Saint-Étienne (France), Hubert Curien Laboratory
Machine Learning, Statistical Machine Learning
Paul Viallard
Researcher, LACODAM Team, INRIA Rennes, IRISA
Machine Learning