Be Intentional About Fairness!: Fairness, Size, and Multiplicity in the Rashomon Set

πŸ“… 2025-01-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses fairness optimization within the Rashomon setβ€”the collection of models exhibiting statistically equivalent predictive performance. It investigates whether arbitrary model selection from this set, when accuracy is comparable across candidates, may induce substantial unfairness. The study provides the first systematic characterization of the Rashomon set’s cardinality, error-tolerance distribution, and per-instance prediction flip probability. Methodologically, it introduces fairness-aware random sampling and optimal fair model identification, integrating theoretical analysis, probabilistic modeling, and statistical fairness metrics (e.g., statistical parity). Empirical results demonstrate that fairness disparities across models in the Rashomon set span orders of magnitude, with highly unequal impacts across demographic groups under random selection. These findings establish a theoretical foundation for *proactive fairness pursuit* in AI governance and deliver scalable algorithmic pathways to embed fairness without compromising predictive fidelity.

πŸ“ Abstract
When selecting a model from a set of equally performant models, how much unfairness can you really reduce? Is it important to be intentional about fairness when choosing among this set, or is arbitrarily picking any "good" model good enough? Recent work has highlighted that the phenomenon of model multiplicity, where multiple models with nearly identical predictive accuracy exist for the same task, has both positive and negative implications for fairness, from strengthening the enforcement of civil rights law in AI systems to showcasing arbitrariness in AI decision-making. Despite the enormous implications of model multiplicity, little work explores the properties of sets of equally accurate models, or Rashomon sets, in general. In this paper, we present five main theoretical and methodological contributions that help us understand the relatively unexplored properties of the Rashomon set, in particular with regard to fairness. Our contributions include methods for efficiently sampling models from this set and techniques for identifying the fairest models according to key fairness metrics such as statistical parity. We also derive the probability that an individual's prediction will be flipped within the Rashomon set, as well as expressions for the set's size and the distribution of error tolerance across models. These results lead to policy-relevant takeaways, such as the importance of intentionally searching for fair models within the Rashomon set, and of understanding which individuals or groups are most susceptible to arbitrary decisions.
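The abstract's central claim, that models with near-identical accuracy can differ sharply in fairness and that individual predictions can flip arbitrarily across them, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the synthetic data, the linear classifiers, and the naive rejection-sampling scheme are stand-ins, not the paper's actual sampling or optimization methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative assumption): 2 features, a binary
# protected group attribute, and a label correlated with both.
n = 500
X = rng.normal(size=(n, 2))
group = (rng.random(n) < 0.5).astype(int)
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * group > 0).astype(int)

def accuracy(w):
    """Accuracy of the linear classifier sign(X @ w)."""
    return ((X @ w > 0).astype(int) == y).mean()

def parity_gap(w):
    """Absolute statistical parity gap between the two groups."""
    pred = (X @ w > 0).astype(int)
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

# Reference model: best of 200 random linear classifiers.
best_w = max((rng.normal(size=2) for _ in range(200)), key=accuracy)
eps = 0.02  # error tolerance defining the Rashomon set

# Naive rejection sampling: keep candidates whose accuracy is within
# eps of the reference model (the reference itself is always kept).
candidates = [best_w] + [rng.normal(size=2) for _ in range(5000)]
rashomon = [w for w in candidates if accuracy(w) >= accuracy(best_w) - eps]

gaps = [parity_gap(w) for w in rashomon]
print(f"{len(rashomon)} models in the Rashomon set")
print(f"statistical parity gap: min={min(gaps):.3f}, max={max(gaps):.3f}")

# Per-instance flip behaviour: the fraction of Rashomon models that
# predict 1; values near 0.5 mark individuals whose outcome is
# effectively decided by arbitrary model choice.
preds = np.array([(X @ w > 0).astype(int) for w in rashomon])
flip = preds.mean(axis=0)
print(f"instances with flip rate in (0.1, 0.9): "
      f"{int(((flip > 0.1) & (flip < 0.9)).sum())}")
```

The spread between the minimum and maximum parity gaps is the room a deliberate, fairness-aware selection can exploit; a purely random pick lands anywhere in that range.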
Problem

Research questions and friction points this paper is trying to address.

Model Selection
Fairness
Complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rashomon Set Properties
Fairness Optimization
Prediction Uncertainty
πŸ”Ž Similar Papers
No similar papers found.