FairTTTS: A Tree Test Time Simulation Method for Fairness-Aware Classification

📅 2025-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address algorithmic bias against disadvantaged groups in machine learning classifiers, this paper proposes FairTTTS, a test-time fairness correction method for tree-based models that requires no retraining. FairTTTS builds on Tree Test Time Simulation (TTTS), a technique originally developed to improve accuracy and adversarial robustness through probabilistic decision-path adjustments, and is the first application of TTTS to fairness optimization. It uses a distance-based heuristic to adjust decisions at protected-attribute nodes, jointly improving accuracy and fairness. Because the adjustment is a post-processing step, FairTTTS can be applied to pre-trained models, diverse datasets, and multiple fairness metrics. Experiments across seven benchmark datasets show an average fairness improvement of 20.96% over the baseline, outperforming related state-of-the-art methods (18.78%). Notably, prediction accuracy increases by 0.55%, whereas competing approaches incur an average accuracy drop of 0.42%.

📝 Abstract
Algorithmic decision-making has become deeply ingrained in many domains, yet biases in machine learning models can still produce discriminatory outcomes, often harming unprivileged groups. Achieving fair classification is inherently challenging, requiring a careful balance between predictive performance and ethical considerations. We present FairTTTS, a novel post-processing bias mitigation method inspired by the Tree Test Time Simulation (TTTS) method. Originally developed to enhance accuracy and robustness against adversarial inputs through probabilistic decision-path adjustments, TTTS serves as the foundation for FairTTTS. By building on this accuracy-enhancing technique, FairTTTS mitigates bias and improves predictive performance. FairTTTS uses a distance-based heuristic to adjust decisions at protected attribute nodes, ensuring fairness for unprivileged samples. This fairness-oriented adjustment occurs as a post-processing step, allowing FairTTTS to be applied to pre-trained models, diverse datasets, and various fairness metrics without retraining. Extensive evaluation on seven benchmark datasets shows that FairTTTS outperforms traditional methods in fairness improvement, achieving a 20.96% average increase over the baseline compared to 18.78% for related work, and further enhances accuracy by 0.55%. In contrast, competing methods typically reduce accuracy by 0.42%. These results confirm that FairTTTS effectively promotes more equitable decision-making while simultaneously improving predictive performance.
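The abstract describes the core mechanism: a probabilistic, distance-based adjustment of branch decisions at protected-attribute nodes during tree traversal. A minimal Python sketch of that idea follows, using a toy `Node` structure; the flip-probability formula `1 / (1 + alpha * dist)` and the names `fair_predict` and `alpha` are illustrative assumptions, not the paper's exact heuristic.

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[str] = None   # None marks a leaf
    threshold: float = 0.0
    left: Optional["Node"] = None   # taken when feature value <= threshold
    right: Optional["Node"] = None  # taken when feature value > threshold
    label: int = 0                  # prediction stored at a leaf

def fair_predict(node: Node, x: dict, protected: str = "sex",
                 alpha: float = 2.0, rng=random) -> int:
    """Traverse the tree; at nodes splitting on the protected attribute,
    flip the branch with a probability that shrinks as the sample moves
    away from the split threshold (hypothetical distance heuristic)."""
    while node.feature is not None:
        go_left = x[node.feature] <= node.threshold
        if node.feature == protected:
            # Closer to the threshold -> higher chance of taking the
            # opposite branch, softening decisions on the protected attribute.
            dist = abs(x[node.feature] - node.threshold)
            p_flip = 1.0 / (1.0 + alpha * dist)
            if rng.random() < p_flip:
                go_left = not go_left
        node = node.left if go_left else node.right
    return node.label
```

Splits on non-protected features remain deterministic, so the correction only perturbs paths that pass through protected-attribute nodes, which matches the post-processing framing: the trained tree itself is never modified.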
Problem

Research questions and friction points this paper is trying to address.

Machine Learning
Fairness
Classification Bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

FairTTTS
BiasReduction
PredictionAccuracyEnhancement