Ethical Fairness without Demographics in Human-Centered AI

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving ethical fairness in human-centered AI applications—such as healthcare and education—where conventional fairness methods relying on sensitive attributes are often infeasible due to privacy constraints or missing demographic data, and where statistical parity may conflict with ethical principles. To this end, the authors propose Flare, a novel framework that, for the first time, integrates Fisher information-guided latent subgroup discovery with a “do-no-harm” regularization to optimize fairness without requiring explicit demographic information. Flare identifies performance disparities through geometric analyses of representations, loss landscapes, and curvature signals, and simultaneously enhances performance across all inferred subgroups. The study also introduces BHE (Beneficence–Harm Avoidance–Equity), a new metric for ethically aligned fairness evaluation. Experiments on physiological (EDA), behavioral (IHS), and clinical (OhioT1DM) datasets demonstrate that Flare consistently outperforms existing approaches, improving ethical fairness while maintaining strong overall performance.
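To make the idea of demographic-agnostic subgroup discovery concrete, the sketch below clusters samples into latent "hard" and "easy" strata using only per-sample loss and a diagonal empirical-Fisher proxy (squared per-sample gradient norms). This is a minimal illustration of the general technique, not the paper's Flare implementation: the toy data, the logistic-regression model, and the median-split scoring rule are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data with a hidden noisy subgroup.
# Note: no demographic labels are used anywhere below.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)
y[150:] = rng.integers(0, 2, size=50).astype(float)  # latent "hard" stratum

# Fit a simple logistic-regression model by gradient descent
# (stands in for whatever pre-trained model is being audited).
w = np.zeros(5)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Per-sample signals: cross-entropy loss and an empirical-Fisher proxy
# (squared norm of each sample's loss gradient w.r.t. the weights).
p = 1.0 / (1.0 + np.exp(-X @ w))
losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
grads = (p - y)[:, None] * X          # per-sample gradient rows
fisher = np.sum(grads ** 2, axis=1)   # diagonal empirical-Fisher proxy

# Combine loss and curvature signals into one score and split into
# two latent strata at the median (a deliberately crude clustering).
score = losses / losses.std() + fisher / fisher.std()
hard = score > np.median(score)
print("inferred hard-stratum size:", hard.sum())
print("mean loss (hard vs easy): %.3f vs %.3f"
      % (losses[hard].mean(), losses[~hard].mean()))
```

The point of the sketch is that loss and curvature signals alone, with no access to sensitive attributes, are enough to surface a stratum of samples the model serves poorly; a real system would refine these strata and re-train against them.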

📝 Abstract
Computational models are increasingly embedded in human-centered domains such as healthcare, education, workplace analytics, and digital well-being, where their predictions directly influence individual outcomes and collective welfare. In such contexts, achieving high accuracy alone is insufficient; models must also act ethically and equitably across diverse populations. However, fair AI approaches that rely on demographic attributes are impractical, as such information is often unavailable, privacy-sensitive, or restricted by regulatory frameworks. Moreover, conventional parity-based fairness approaches, while aiming for equity, can inadvertently violate core ethical principles by trading off subgroup performance or stability. To address this challenge, we present Flare (Fisher-guided LAtent-subgroup learning with do-no-harm REgularization), the first demographic-agnostic framework that aligns algorithmic fairness with ethical principles through the geometry of optimization. Flare leverages Fisher Information to regularize curvature, uncovering latent disparities in model behavior without access to demographic or sensitive attributes. By integrating representation, loss, and curvature signals, it identifies hidden performance strata and adaptively refines them through collaborative but do-no-harm optimization, enhancing each subgroup's performance while preserving global stability and ethical balance. We also introduce BHE (Beneficence-Harm Avoidance-Equity), a novel metric suite that operationalizes ethical fairness evaluation beyond statistical parity. Extensive evaluations across diverse physiological (EDA), behavioral (IHS), and clinical (OhioT1DM) datasets show that Flare consistently enhances ethical fairness compared to state-of-the-art baselines.
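The "collaborative but do-no-harm optimization" described in the abstract can be illustrated as a regularized objective: the usual mean loss plus a hinge penalty that activates whenever any inferred subgroup's mean loss rises above a previously recorded baseline. This is a hypothetical operationalization for illustration only; the function name, the squared-hinge form, and the per-subgroup baselines are assumptions, not the paper's actual objective.

```python
import numpy as np

def do_no_harm_objective(losses, groups, baseline, lam=1.0):
    """Mean loss plus a squared-hinge penalty whenever any inferred
    subgroup's mean loss exceeds its recorded baseline ("do no harm")."""
    total = losses.mean()
    for g in np.unique(groups):
        gap = losses[groups == g].mean() - baseline[g]
        total += lam * max(gap, 0.0) ** 2   # penalize only regressions
    return total

# Toy per-sample losses with two inferred (not demographic) strata.
losses = np.array([0.2, 0.4, 0.9, 1.1])
groups = np.array([0, 0, 1, 1])
baseline = {0: 0.5, 1: 0.8}  # per-stratum losses from an earlier checkpoint
print(round(do_no_harm_objective(losses, groups, baseline), 3))  # prints 0.69
```

Here stratum 0 improved on its baseline (0.3 < 0.5) and contributes no penalty, while stratum 1 regressed (1.0 > 0.8) and adds lam * 0.2², so the objective discourages gains that come at any subgroup's expense.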
Problem

Research questions and friction points this paper is trying to address.

Ethical Fairness
Demographic-Agnostic AI
Algorithmic Fairness
Human-Centered AI
Latent Subgroups
Innovation

Methods, ideas, or system contributions that make the work stand out.

demographic-agnostic fairness
Fisher Information regularization
do-no-harm optimization
latent subgroup discovery
ethical fairness metrics