GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models

📅 2024-08-22
🏛️ Conference on Computer and Communications Security
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited practicality, insufficient inclusivity, and bias-introduction risks of existing benchmarks for evaluating gender bias in LLMs. The authors propose GenderCARE, an end-to-end gender fairness framework covering both assessment and debiasing. Methodologically: (1) a multidimensional fairness criterion that, for the first time, standardizes evaluation to include transgender and non-binary individuals; (2) GenderPair, a paired benchmark enabling fine-grained, counterfactual-aware bias measurement; and (3) a debiasing paradigm combining counterfactual data augmentation with gender-aware fine-tuning. Evaluated on 17 mainstream LLMs, the approach reduces gender bias by over 35% on average (peaking above 90%) while keeping performance on mainstream language tasks within 2% of baseline, advancing both the fairness and the practical utility of LLMs.

📝 Abstract
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but they have also been observed to magnify societal biases, particularly those related to gender. In response to this issue, several benchmarks have been proposed to assess gender bias in LLMs. However, these benchmarks often lack practical flexibility or inadvertently introduce biases. To address these shortcomings, we introduce GenderCARE, a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics for quantifying and mitigating gender bias in LLMs. To begin, we establish pioneering criteria for gender equality benchmarks, spanning dimensions such as inclusivity, diversity, explainability, objectivity, robustness, and realisticity. Guided by these criteria, we construct GenderPair, a novel pair-based benchmark designed to assess gender bias in LLMs comprehensively. Our benchmark provides standardized and realistic evaluations, including previously overlooked gender groups such as transgender and non-binary individuals. Furthermore, we develop effective debiasing techniques that incorporate counterfactual data augmentation and specialized fine-tuning strategies to reduce gender bias in LLMs without compromising their overall performance. Extensive experiments demonstrate a significant reduction in various gender bias benchmarks, with reductions peaking at over 90% and averaging above 35% across 17 different LLMs. Importantly, these reductions come with minimal variability in mainstream language tasks, remaining below 2%. By offering a realistic assessment and tailored reduction of gender biases, we hope that our GenderCARE can represent a significant step towards achieving fairness and equity in LLMs. More details are available at https://github.com/kstanghere/GenderCARE-ccs24.
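The pair-based assessment described above can be sketched as follows: prompt the model with inputs that differ only in the gender group and measure the spread of an output score across groups. The prompt template, score function, and group list below are illustrative assumptions, not the paper's actual benchmark code.

```python
# Hedged sketch of a pair-based bias probe in the spirit of GenderPair.
# The template, scorer, and group list are illustrative stand-ins.
GROUPS = ["male", "female", "transgender", "non-binary"]
TEMPLATE = "The {group} engineer was described as"

def toy_score(text: str) -> float:
    """Stand-in for a real regard/sentiment scorer over model output."""
    positive = {"brilliant", "capable", "reliable"}
    words = set(text.lower().split())
    return len(words & positive) / max(len(words), 1)

def pair_bias(generate) -> float:
    """Max-minus-min score gap across gender groups (0.0 means no gap)."""
    scores = [toy_score(generate(TEMPLATE.format(group=g))) for g in GROUPS]
    return max(scores) - min(scores)
```

A model that flatters only one group yields a large gap, while a model whose outputs score identically across all groups yields 0.0.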
Problem

Research questions and friction points this paper is trying to address.

Assessing gender bias in language models
Reducing bias without performance loss
Including diverse gender groups in evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive gender bias framework
Novel pair-based benchmark GenderPair
Debiasing via counterfactual augmentation and specialized fine-tuning
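The counterfactual-augmentation idea can be sketched as below. The term-pair list and function names are illustrative assumptions, not the paper's released code; a real pipeline would use a much richer lexicon, including transgender and non-binary terms.

```python
import re

# Minimal sketch of counterfactual data augmentation (CDA) for gender
# debiasing: pair each sentence with a copy whose gendered terms are
# swapped, then fine-tune on both. The pair list is a tiny illustrative
# sample; ambiguous forms like "her"/"his" need part-of-speech handling
# that this sketch omits.
PAIRS = [("he", "she"), ("him", "her"), ("man", "woman"),
         ("father", "mother"), ("son", "daughter"), ("king", "queen")]
SWAP = {a: b for a, b in PAIRS} | {b: a for a, b in PAIRS}
PATTERN = re.compile(r"\b(" + "|".join(SWAP) + r")\b", re.IGNORECASE)

def counterfactual(sentence: str) -> str:
    """Return a copy of the sentence with gendered terms swapped."""
    def repl(m):
        word = m.group(0)
        swapped = SWAP[word.lower()]
        # Preserve the capitalization of the original token.
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(repl, sentence)

def augment(corpus):
    """Pair each sentence with its counterfactual for fine-tuning."""
    return [(s, counterfactual(s)) for s in corpus]
```

Training on both members of each pair encourages the model to assign similar likelihoods to either gendered variant of a context.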
Kunsheng Tang
University of Science and Technology of China, Hefei, China
Wenbo Zhou
University of Science and Technology of China, Hefei, China
Jie Zhang
Nanyang Technological University, Singapore, Singapore
Aishan Liu
Beihang University, Beijing, China
Gelei Deng
Nanyang Technological University
Cybersecurity, System Security, Robotics Security, AI Security, Software Testing
Shuai Li
University of Science and Technology of China, Hefei, China
Peigui Qi
University of Science and Technology of China, Hefei, China
Weiming Zhang
University of Science and Technology of China, Hefei, China
Tianwei Zhang
Nanyang Technological University, Singapore, Singapore
Nenghai Yu
University of Science and Technology of China
Computer Vision, Artificial Intelligence, Information Hiding