Fairness-aware Bayes optimal functional classification

📅 2025-05-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses algorithmic fairness in functional data classification, aiming to bound predictive disparity across subpopulations within a pre-specified threshold. To this end, we establish the first theoretical framework that unifies fairness constraints with Bayes optimality in infinite-dimensional function spaces. We propose Fair-FLDA, a post-processing algorithm for functional linear discriminant analysis (FLDA), which achieves risk control via group-adaptive thresholding and feature-space regularization—yielding the first fairness-guaranteed variant of FLDA. Leveraging homoscedastic Gaussian process modeling and infinite-dimensional statistical inference, we prove that Fair-FLDA attains the optimal convergence rate under fairness constraints; its excess risk bound exhibits explicit dependence on key parameters and naturally recovers standard FLDA as a special case. Empirical evaluation on synthetic and real-world functional datasets confirms its effectiveness.

📝 Abstract
Algorithmic fairness has become a central topic in machine learning, and mitigating disparities across different subpopulations has emerged as a rapidly growing research area. In this paper, we systematically study the classification of functional data under fairness constraints, ensuring that the disparity level of the classifier is controlled below a pre-specified threshold. We propose a unified framework for fairness-aware functional classification, tackling an infinite-dimensional functional space, addressing key challenges arising from the absence of density ratios and the intractability of posterior probabilities, and discussing phenomena unique to functional classification. We further design a post-processing algorithm, the Fair Functional Linear Discriminant Analysis classifier (Fair-FLDA), which targets homoscedastic Gaussian processes and achieves fairness via group-wise thresholding. Under weak structural assumptions on the eigenspace, theoretical guarantees on fairness and excess risk control are established. As a byproduct, our results cover excess risk control for standard FLDA as a special case, which, to the best of our knowledge, is the first such result. Our theoretical findings are complemented by extensive numerical experiments on synthetic and real datasets, highlighting the practicality of the proposed algorithm.
Problem

Research questions and friction points this paper is trying to address.

Classify functional data under fairness constraints
Address challenges in infinite-dimensional functional space
Ensure fairness and risk control in classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fairness-aware functional classification framework
Fair-FLDA algorithm with group-wise thresholding
Theoretical guarantees on fairness and risk
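The group-wise thresholding idea behind Fair-FLDA can be illustrated with a minimal post-processing sketch: given discriminant scores and a sensitive-group label, shift the threshold of the lower-acceptance group so that the positive-prediction rates differ by at most a pre-specified disparity level delta. This is a hypothetical simplification for intuition only, not the paper's actual Fair-FLDA estimator (which operates on infinite-dimensional functional data under a homoscedastic Gaussian process model); the function names and the demographic-parity-style disparity measure are assumptions.

```python
import numpy as np

def fair_groupwise_thresholds(scores, groups, delta=0.05):
    """Pick a per-group threshold on discriminant scores so that the
    positive-prediction rates across groups differ by at most delta.

    Hypothetical illustration of group-wise thresholding as a
    post-processing step; not the paper's Fair-FLDA procedure.
    """
    thresholds = {g: 0.0 for g in np.unique(groups)}
    # acceptance rate of each group under the common base threshold 0
    rates = {g: float(np.mean(scores[groups == g] > thresholds[g]))
             for g in thresholds}
    hi = max(rates, key=rates.get)  # group with the higher acceptance rate
    lo = min(rates, key=rates.get)  # group with the lower acceptance rate
    if rates[hi] - rates[lo] > delta:
        # lower the disadvantaged group's threshold to the empirical
        # quantile that brings its rate to within delta of the other group
        target_rate = rates[hi] - delta
        thresholds[lo] = float(np.quantile(scores[groups == lo],
                                           1.0 - target_rate))
    return thresholds

def predict(scores, groups, thresholds):
    """Classify as positive when a score exceeds its group's threshold."""
    return np.array([s > thresholds[g] for s, g in zip(scores, groups)])
```

In this toy version the disparity measure is the gap in positive-prediction rates; the paper's framework allows the disparity level to be controlled below any pre-specified threshold, with accompanying excess-risk guarantees.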
Xiaoyu Hu
School of Mathematics and Statistics, Xi’an Jiaotong University
Gengyu Xue
Department of Statistics, University of Warwick
Zhenhua Lin
National University of Singapore
Functional data analysis · High-dimensional data analysis · Non-Euclidean data analysis
Yi Yu
Department of Statistics, University of Warwick