A Catalog of Fairness-Aware Practices in Machine Learning Engineering

📅 2024-08-29
🏛️ arXiv.org
📈 Citations: 3
Influential citations: 0
🤖 AI Summary
The widespread deployment of machine learning (ML) in decision-making systems introduces significant fairness risks, particularly around the handling of sensitive attributes and the protection of minority groups, yet software engineering has lacked a systematic, lifecycle-oriented framework of fairness engineering practices.

Method: A systematic mapping study (SMS) of the existing literature analyzes fairness-related practices across the ML development lifecycle.

Contribution/Results: The paper proposes the first software-engineering-centric catalog of fairness practices, comprising 28 structured, actionable practices explicitly mapped to lifecycle stages such as data preprocessing, modeling, and deployment. Each practice is annotated with its lifecycle phase and contextual applicability, bridging the gap between fairness research and industrial implementation. The catalog serves as an operational guide for researchers and practitioners, enhancing the reliability, accountability, and trustworthiness of ML systems.

📝 Abstract
Machine learning's widespread adoption in decision-making processes raises concerns about fairness, particularly regarding the treatment of sensitive features and potential discrimination against minorities. The software engineering community has responded by developing fairness-oriented metrics, empirical studies, and approaches. However, there remains a gap in understanding and categorizing practices for engineering fairness throughout the machine learning lifecycle. This paper presents a novel catalog of practices for addressing fairness in machine learning derived from a systematic mapping study. The study identifies and categorizes 28 practices from existing literature, mapping them onto different stages of the machine learning lifecycle. From this catalog, the authors extract actionable items and implications for both researchers and practitioners in software engineering. This work aims to provide a comprehensive resource for integrating fairness considerations into the development and deployment of machine learning systems, enhancing their reliability, accountability, and credibility.
Problem

Research questions and friction points this paper is trying to address.

Addressing fairness gaps in ML lifecycle practices
Cataloging fairness-aware methods for sensitive feature treatment
Providing actionable fairness guidelines for ML engineering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic mapping study identifies fairness practices
Catalog maps 28 practices to ML lifecycle
Actionable items for fairness in ML engineering
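To make the "sensitive feature treatment" family of practices concrete, a minimal sketch of one such practice is a demographic parity check run before deployment. The function name and the example data below are illustrative assumptions, not taken from the paper, which catalogs practices rather than providing implementations:

```python
def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two
    groups encoded in `sensitive` (values 0/1). Values near 0 indicate
    parity; larger values flag a potential fairness issue to investigate."""
    rates = {}
    for g in (0, 1):
        group = [p for p, s in zip(y_pred, sensitive) if s == g]
        rates[g] = sum(group) / len(group)
    return abs(rates[0] - rates[1])

# Hypothetical predictions for 8 applicants, split by a binary
# sensitive attribute: group 0 rate = 0.75, group 1 rate = 0.25.
preds     = [1, 0, 1, 1, 0, 0, 1, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, sensitive))  # → 0.5
```

In practice, libraries such as Fairlearn provide vetted implementations of this and related group-fairness metrics; the point of the catalog is to prescribe when in the lifecycle such checks should run.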