Bias by Design? How Data Practices Shape Fairness in AI Healthcare Systems

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses historical, representation, and measurement biases in AI healthcare systems arising from flawed data collection design—biases that undermine fairness across gender, age, and socioeconomic status. Through multi-case analysis—including the AI4HealthyAging initiative—bias taxonomy modeling, and fairness evaluation, we systematically identify key sources and propagation pathways of bias in clinical data. We propose the first fairness-by-design framework specifically targeting the data collection phase, filling a critical gap in bias mitigation research at this early stage. The framework yields generalizable strategies for refining clinical problem formulation and actionable interventions for data curation practices. Empirical validation demonstrates substantial improvements in model robustness and cross-population fairness. Our work provides a methodological foundation for developing trustworthy, equitable AI in healthcare.

📝 Abstract
Artificial intelligence (AI) holds great promise for transforming healthcare. However, despite significant advances, the integration of AI solutions into real-world clinical practice remains limited. A major barrier is the quality and fairness of training data, which is often compromised by biased data collection practices. This paper draws on insights from the AI4HealthyAging project, part of Spain's national R&D initiative, where our task was to detect biases during clinical data collection. We identify several types of bias across multiple use cases, including historical, representation, and measurement biases. These biases manifest in variables such as sex, gender, age, habitat, socioeconomic status, equipment, and labeling. We conclude with practical recommendations for improving the fairness and robustness of clinical problem design and data collection. We hope that our findings and experience contribute to guiding future projects in the development of fairer AI systems in healthcare.
Problem

Research questions and friction points this paper is trying to address.

Identifying biases in clinical data collection practices
Analyzing how data biases affect AI healthcare system fairness
Providing recommendations for fairer clinical data design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detect biases during clinical data collection
Identify historical, representation, and measurement biases
Provide recommendations for fairer clinical data practices