Open-DeBias: Toward Mitigating Open-Set Bias in Language Models

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Contemporary LLM-based question-answering systems frequently encode societal and stereotypical biases, and mainstream debiasing approaches are largely confined to closed-set categories, limiting their efficacy against open-domain, emerging, and cross-lingual biases. This paper introduces the first open-set bias detection and mitigation framework. First, we construct OpenBiasBench—a multilingual, open-domain evaluation benchmark explicitly designed to cover emerging and underrepresented biases. Second, we propose an adapter-based parameter-efficient fine-tuning method integrated with data-efficient debiasing strategies, enabling zero-shot cross-lingual and cross-task generalization. Third, our approach achieves a 48% absolute improvement in accuracy on the BBQ ambiguous subset and a 6% gain on the disambiguated subset. It attains 84% zero-shot transfer accuracy on the Korean BBQ benchmark and demonstrates robust performance across diverse tasks—including StereoSet—validating its generalizability and effectiveness.

📝 Abstract
Large Language Models (LLMs) have achieved remarkable success on question answering (QA) tasks, yet they often encode harmful biases that compromise fairness and trustworthiness. Most existing bias mitigation approaches are restricted to predefined categories, limiting their ability to address novel or context-specific emergent biases. To bridge this gap, we tackle the novel problem of open-set bias detection and mitigation in text-based QA. We introduce OpenBiasBench, a comprehensive benchmark designed to evaluate biases across a wide range of categories and subgroups, encompassing both known and previously unseen biases. Additionally, we propose Open-DeBias, a novel, data-efficient, and parameter-efficient debiasing method that leverages adapter modules to mitigate existing social and stereotypical biases while generalizing to unseen ones. Compared to the state-of-the-art BMBI method, Open-DeBias improves QA accuracy on the BBQ dataset by nearly 48% on ambiguous subsets and 6% on disambiguated ones, using adapters fine-tuned on just a small fraction of the training data. Remarkably, the same adapters, in a zero-shot transfer to Korean BBQ, achieve 84% accuracy, demonstrating robust language-agnostic generalization. Through extensive evaluation, we also validate the effectiveness of Open-DeBias across a broad range of NLP tasks, including StereoSet and CrowS-Pairs, highlighting its robustness, multilingual strength, and suitability for general-purpose, open-domain bias mitigation. The project page is available at: https://sites.google.com/view/open-debias25
Problem

Research questions and friction points this paper is trying to address.

Detecting and mitigating novel biases in language models
Addressing open-set bias beyond predefined categories in QA
Developing efficient debiasing methods for unseen social biases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapter modules enable efficient bias mitigation
Generalizes to unseen biases without full retraining
Achieves multilingual generalization with minimal training data
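To make the adapter idea concrete, here is a minimal sketch of the bottleneck-adapter pattern that parameter-efficient fine-tuning methods of this family build on: a small down-projection, a nonlinearity, an up-projection, and a residual connection, inserted while the backbone weights stay frozen. All names, dimensions, and weights below are illustrative assumptions, not the paper's actual code.

```python
# Illustrative bottleneck adapter (hypothetical, plain Python, no frameworks).
# Only the tiny W_down / W_up matrices would be trained for debiasing;
# the backbone hidden state h comes from a frozen model.

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(x):
    return [max(0.0, v) for v in x]

def adapter(h, W_down, W_up):
    """Bottleneck adapter: project down, apply nonlinearity, project up,
    then add a residual so the frozen backbone's output passes through
    unchanged when the adapter weights are zero."""
    z = relu(matvec(W_down, h))   # hidden_dim -> bottleneck_dim
    delta = matvec(W_up, z)       # bottleneck_dim -> hidden_dim
    return [a + d for a, d in zip(h, delta)]

# Tiny example: hidden size 4, bottleneck size 2 (made-up weights).
h = [1.0, -2.0, 0.5, 3.0]
W_down = [[0.1, 0.0, 0.0, 0.0],
          [0.0, 0.0, 0.0, 0.1]]
W_up = [[1.0, 0.0],
        [0.0, 1.0],
        [0.0, 0.0],
        [0.0, 0.0]]
out = adapter(h, W_down, W_up)
```

Because only the two small projection matrices are updated, the trainable parameter count scales with the bottleneck width rather than the model size, which is what makes the approach data- and parameter-efficient.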
Arti Rani
Mehta Family School of DS & AI, Indian Institute of Technology Roorkee, India

Shweta Singh
Mehta Family School of DS & AI, Indian Institute of Technology Roorkee, India

Nihar Ranjan Sahoo
Computer Science and Engineering, Indian Institute of Technology Bombay, India

Gaurav Kumar Nayak
Assistant Professor, IIT Roorkee
Machine Learning, Deep Learning for Computer Vision, Data-efficient Deep Learning, Generative AI