Argumentative Debates for Transparent Bias Detection [Technical Report]

📅 2025-08-06
🤖 AI Summary
AI fairness research has long overlooked transparency, despite the critical role of explainability in bias detection. Method: This paper proposes a transparency-oriented bias detection framework grounded in computational argumentation. It formalizes an argumentative structure that comparatively analyzes protected attributes—both for an individual instance and its local neighborhood—to yield interpretable, traceable, and qualitatively grounded bias assessments at the individual level, complemented by quantitative evaluation. Contribution/Results: To our knowledge, this is the first work to integrate formal argumentation and computational debate into fairness analysis, enabling cross-neighborhood bias reasoning. The method achieves high detection accuracy while providing structured, human-understandable explanations. Extensive experiments on multiple benchmark datasets demonstrate superior performance over state-of-the-art approaches in both bias identification accuracy and explanation quality.

📝 Abstract
As the use of AI systems in society grows, addressing potential biases that emerge from data or are learned by models is essential to prevent systematic disadvantages against specific groups. Several notions of (un)fairness have been proposed in the literature, alongside corresponding algorithmic methods for detecting and mitigating unfairness, but, with very few exceptions, these tend to ignore transparency. Yet interpretability and explainability are core requirements for algorithmic fairness, even more so than for other algorithmic solutions, given the human-oriented nature of fairness. In this paper, we contribute a novel interpretable, explainable method for bias detection relying on debates about the presence of bias against individuals, based on the values of protected features for the individuals and others in their neighbourhoods. Our method builds upon techniques from formal and computational argumentation, whereby debates result from arguing about biases within and across neighbourhoods. We provide formal, quantitative, and qualitative evaluations of our method, highlighting its strengths in performance against baselines, as well as its interpretability and explainability.
Problem

Research questions and friction points this paper is trying to address.

Detect bias in AI systems transparently using interpretable methods
Address fairness gaps by analyzing protected features in neighborhoods
Combine argumentation and computational techniques for bias identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Debate-based bias detection method
Formal and computational argumentation techniques
Interpretable explainable fairness evaluation
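To make the neighbourhood-debate idea concrete, here is a minimal sketch of how pro- and con-bias arguments could be collected for a single individual by comparing it against nearby instances that differ in a protected feature. This is an illustrative toy example, not the paper's actual argumentation framework: the distance measure, the pro/con counting rule, and all function names are assumptions made for this sketch.

```python
from math import dist

def neighbours(x, data, k, protected_idx):
    """k nearest instances to x, measured on non-protected features only."""
    def strip(v):  # drop the protected feature before measuring distance
        return [f for i, f in enumerate(v) if i != protected_idx]
    return sorted(data, key=lambda v: dist(strip(x), strip(v)))[:k]

def bias_debate(x, outcome, data, outcomes, k, protected_idx):
    """Collect pro-bias and con-bias arguments from x's neighbourhood.

    A neighbour that matches x on non-protected features but differs on
    the protected one, yet receives a different outcome, is a pro-bias
    argument; one receiving the same outcome is a con-bias argument.
    """
    pro, con = [], []
    for n in neighbours(x, data, k, protected_idx):
        if n[protected_idx] != x[protected_idx]:
            if outcomes[tuple(n)] != outcome:
                pro.append(tuple(n))  # treated differently: evidence of bias
            else:
                con.append(tuple(n))  # treated the same: evidence against bias
    return pro, con

def verdict(pro, con):
    """Toy resolution rule: declare bias if pro arguments outnumber con."""
    return len(pro) > len(con)
```

For example, with a protected feature at index 0 and a score at index 1, an individual `(0, 1.0)` with outcome `1` whose close neighbours `(1, 1.0)` and `(1, 1.1)` both received outcome `0` yields two pro-bias arguments and no con-bias ones, so the toy debate concludes that bias is present. The paper's method resolves such debates with formal argumentation semantics rather than this simple majority count.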