🤖 AI Summary
AI fairness research often assumes access to complete demographic information; in practice, such data is frequently unavailable due to privacy regulations or ethical constraints, limiting real-world deployment of existing fairness methods. This survey addresses the setting of *incomplete sensitive attributes*, proposing a systematic taxonomy of fairness notions for this scenario. It analyzes the logical relationships and applicability boundaries among fairness definitions, exposing fundamental limitations of conventional approaches. Drawing on de-identification, privacy-preserving techniques, and weakly supervised learning, it surveys fair modeling approaches that do not require full sensitive-attribute annotations. Finally, it synthesizes key open challenges and future research directions, offering both theoretical foundations and practical guidance for moving AI fairness from idealized assumptions toward operational reality.
📝 Abstract
Fairness in artificial intelligence (AI) has become a growing concern due to discriminatory outcomes in AI-based decision-making systems. While various methods have been proposed to mitigate bias, most rely on complete demographic information, an assumption often impractical due to legal constraints and the risk of reinforcing discrimination. This survey examines fairness in AI when demographics are incomplete, addressing the gap between traditional approaches and real-world challenges. We introduce a novel taxonomy of fairness notions in this setting, clarifying their relationships and distinctions. Additionally, we summarize existing techniques that promote fairness beyond complete demographics and highlight open research questions to encourage further progress in the field.
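To make the abstract's central assumption concrete, here is a minimal sketch of one standard group fairness notion, demographic parity. The function name and toy data are illustrative, not from the survey; the point is that the metric cannot even be evaluated without a fully observed sensitive attribute `group` — precisely the assumption this survey examines relaxing.

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    Illustrative only: like most group fairness metrics, this presumes
    every example carries a sensitive-attribute label (0 or 1), which
    incomplete-demographics settings do not provide.
    """
    def positive_rate(g):
        members = [p for p, s in zip(y_pred, group) if s == g]
        return sum(members) / len(members)

    return abs(positive_rate(0) - positive_rate(1))

# Toy example with hypothetical binary predictions and group labels.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5 (0.75 vs 0.25)
```

Here group 0 receives positive predictions at rate 0.75 and group 1 at rate 0.25, so the disparity is 0.5; with missing or masked `group` labels, this quantity is undefined without the proxy or weak-supervision techniques the survey catalogs.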