Revisiting Pre-processing Group Fairness: A Modular Benchmarking Framework

πŸ“… 2025-08-20
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Fairness pre-processing methods offer model-agnosticism and privacy preservation but suffer from poor comparability and reproducibility due to the absence of standardized evaluation frameworks. To address this, we propose FairPrepβ€”the first modular, extensible benchmarking framework for fairness pre-processing on tabular data. Built upon AIF360, FairPrep unifies datasets, pre-processing algorithms, and downstream models into a cohesive pipeline. It enables automated large-scale experiments and jointly evaluates multidimensional fairness metrics (e.g., statistical parity, equal opportunity) alongside utility measures (e.g., accuracy, F1-score), generating standardized, interpretable reports. By providing a unified, open-source infrastructure, FairPrep bridges the critical gap in empirical evaluation of data-level fairness interventions. It significantly enhances experimental reproducibility and cross-method comparability, establishing a practical, community-oriented benchmark for fair machine learning research.
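The group-fairness metrics named above have simple closed forms over a model's predictions. As an illustration only (this is not FairPrep's or AIF360's API), a stdlib-only sketch of statistical parity difference, equal opportunity difference, and accuracy, assuming a binary protected attribute where `group == 1` marks the privileged group:

```python
def statistical_parity_difference(y_pred, group):
    """P(y_hat=1 | unprivileged) - P(y_hat=1 | privileged)."""
    priv = [p for p, g in zip(y_pred, group) if g == 1]
    unpriv = [p for p, g in zip(y_pred, group) if g == 0]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates, unprivileged minus privileged."""
    def tpr(keep):
        # predictions on the actually-positive examples of one group
        pos = [p for y, p, g in zip(y_true, y_pred, group)
               if y == 1 and g == keep]
        return sum(pos) / len(pos)
    return tpr(0) - tpr(1)

def accuracy(y_true, y_pred):
    return sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

A value of 0 for either fairness metric indicates parity between the two groups; reporting them jointly with accuracy or F1-score exposes the fairness/utility trade-off the summary refers to.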

πŸ“ Abstract
As machine learning systems become increasingly integrated into high-stakes decision-making processes, ensuring fairness in algorithmic outcomes has become a critical concern. Methods to mitigate bias typically fall into three categories: pre-processing, in-processing, and post-processing. While significant attention has been devoted to the latter two, pre-processing methods, which operate at the data level and offer advantages such as model-agnosticism and improved privacy compliance, have received comparatively less focus and lack standardised evaluation tools. In this work, we introduce FairPrep, an extensible and modular benchmarking framework designed to evaluate fairness-aware pre-processing techniques on tabular datasets. Built on the AIF360 platform, FairPrep allows seamless integration of datasets, fairness interventions, and predictive models. It features a batch-processing interface that enables efficient experimentation and automatic reporting of fairness and utility metrics. By offering standardised pipelines and supporting reproducible evaluations, FairPrep fills a critical gap in the fairness benchmarking landscape and provides a practical foundation for advancing data-level fairness research.
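One concrete example of the data-level interventions the abstract describes is reweighing (Kamiran and Calders), which AIF360 ships as a pre-processing algorithm: each example gets weight w(g, y) = P(g)P(y) / P(g, y), so that group membership and label are statistically independent under the weighted distribution. A stdlib-only sketch of the weight computation (illustrative, not the AIF360 implementation):

```python
from collections import Counter

def reweigh(groups, labels):
    """Kamiran-Calders reweighing weights: w(g, y) = P(g) * P(y) / P(g, y).
    Examples over-represented for their (group, label) pair are down-weighted,
    under-represented ones up-weighted."""
    n = len(labels)
    count_g = Counter(groups)             # marginal counts per group
    count_y = Counter(labels)             # marginal counts per label
    count_gy = Counter(zip(groups, labels))  # joint counts
    return [
        (count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]
```

When group and label are already independent, every weight is 1.0 and the intervention is a no-op, which makes it a useful sanity check inside a benchmarking pipeline.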
Problem

Research questions and friction points this paper is trying to address.

Evaluating fairness-aware pre-processing techniques for tabular datasets
Addressing lack of standardized benchmarking tools for data-level fairness methods
Providing reproducible evaluation framework for model-agnostic bias mitigation approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular framework for fairness pre-processing
Batch-processing interface for efficient experimentation
Standardized pipelines for reproducible fairness evaluations
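The modular composition described above, where any dataset, pre-processing intervention, and downstream model can be swapped in while the evaluation stays fixed, might look like the following sketch. All names here are hypothetical and for illustration only; the paper does not publish FairPrep's actual interfaces:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Sequence

@dataclass
class Dataset:
    features: Sequence[Sequence[float]]
    labels: Sequence[int]
    groups: Sequence[int]  # protected-attribute values

def run_pipeline(dataset: Dataset,
                 preprocess: Callable[[Dataset], Dataset],
                 train: Callable[[Dataset], Callable],
                 metrics: Dict[str, Callable]) -> Dict[str, float]:
    """One experiment: apply a data-level intervention, fit a model,
    and report every registered fairness/utility metric."""
    processed = preprocess(dataset)
    predict = train(processed)
    y_pred = [predict(x) for x in processed.features]
    return {name: fn(processed.labels, y_pred, processed.groups)
            for name, fn in metrics.items()}

# Trivial plug-in components to show the composition:
identity = lambda d: d  # the "no intervention" baseline
majority = lambda d: (lambda x: max(set(d.labels), key=list(d.labels).count))
acc = lambda y, p, g: sum(t == q for t, q in zip(y, p)) / len(y)

report = run_pipeline(
    Dataset([[0.0], [1.0], [2.0]], [1, 1, 0], [0, 1, 1]),
    identity, majority, {"accuracy": acc})
```

A batch-processing interface of the kind the paper mentions would then iterate `run_pipeline` over the cross-product of datasets, interventions, and models, collecting the per-run metric dictionaries into a standardized report.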
πŸ”Ž Similar Papers
No similar papers found.