🤖 AI Summary
This study investigates the independent and interactive effects of datasets, developer backgrounds, and pre-trained models on gender, religious, and national-origin biases in sentiment analysis for Bangla, a low-resource language. Method: the authors fine-tune mBERT and BanglaBERT on all Bangla sentiment analysis datasets indexed by Google Dataset Search and apply a quantitative bias-auditing framework. Contribution/Results: the resulting models exhibit biases across identity categories even on inputs with similar semantic content and structure, pointing to representation and evaluation inequities. Combining pre-trained models with datasets created by developers from varied demographic backgrounds introduces further inconsistencies and uncertainties into bias measurements. By connecting algorithmic bias auditing to epistemic injustice, AI alignment, and methodological decisions in audit design, the work exposes pitfalls in current bias-assessment practice, particularly in low-resource-language settings where such biases remain understudied.
📝 Abstract
Sociotechnical systems, such as language technologies, frequently exhibit identity-based biases. These biases worsen the experiences of historically marginalized communities and remain understudied in low-resource contexts. While language-specific and multilingual models and datasets are commonly recommended to address these biases, this paper empirically tests the effectiveness of such approaches for gender-, religion-, and nationality-based identities in Bengali, a widely spoken but low-resourced language. We conducted an algorithmic audit of sentiment analysis models built on mBERT and BanglaBERT, fine-tuned on all Bengali sentiment analysis (BSA) datasets available through Google Dataset Search. Our analyses showed that BSA models exhibit biases across identity categories despite the inputs having similar semantic content and structure. We also examined the inconsistencies and uncertainties that arise from combining pre-trained models and datasets created by individuals from diverse demographic backgrounds. We connect these findings to broader discussions of epistemic injustice, AI alignment, and methodological decisions in algorithmic audits.
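The audit described above compares model behavior on inputs that share semantic content but differ in the identity term. A minimal sketch of that idea is a counterfactual identity-substitution check: generate sentence pairs that differ only in the identity mention, score each with the sentiment model, and report the mean score gap. Everything here is illustrative, not the paper's actual framework: the templates are English placeholders rather than Bengali data, and `toy_score` is a deliberately biased stand-in for a fine-tuned mBERT/BanglaBERT classifier.

```python
from statistics import mean

# Illustrative templates; a real audit would use Bengali sentences with
# matched semantic content and structure, as in the paper's setup.
TEMPLATES = [
    "The {identity} community organized the event.",
    "My {identity} neighbor is a doctor.",
    "{identity} students passed the exam.",
]

def counterfactual_pairs(templates, group_a, group_b):
    """Yield sentence pairs that differ only in the identity term."""
    for t in templates:
        yield t.format(identity=group_a), t.format(identity=group_b)

def bias_gap(score_fn, templates, group_a, group_b):
    """Mean absolute difference in sentiment score across matched pairs."""
    gaps = [abs(score_fn(sa) - score_fn(sb))
            for sa, sb in counterfactual_pairs(templates, group_a, group_b)]
    return mean(gaps)

def toy_score(text):
    """Stand-in scorer: deliberately penalizes one identity term so the
    audit surfaces a nonzero gap. A real audit would call the model."""
    score = 0.5
    if "group_b" in text:
        score -= 0.2
    return score

gap = bias_gap(toy_score, TEMPLATES, "group_a", "group_b")
print(round(gap, 2))  # 0.2 for this deliberately biased toy scorer
```

A score gap near zero on such matched pairs is a necessary but not sufficient signal of fairness; the paper's point is that the gap also shifts with the dataset and pre-trained model chosen, so any single data-model pairing gives an uncertain bias estimate.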