🤖 AI Summary
Large language models (LLMs) inherit both explicit and implicit social biases from their training data, compromising the fairness of their outputs. To address this, we propose a dual-path automated bias-identification framework: (1) explicit bias detection via established benchmarks (e.g., StereoSet, CrowS-Pairs), leveraging bag-of-words analysis and prompt engineering; and (2) implicit bias detection through data augmentation and targeted fine-tuning to enhance sensitivity to lexical-level stereotypical associations. Our key contribution lies in unifying the modeling of explicit and implicit biases and enabling fine-grained, quantifiable evaluation. Experiments on BERT and GPT-3.5 show that the optimized models achieve up to a 20% improvement on implicit bias benchmarks and markedly better cross-dataset generalization. However, residual keyword dependency persists for gender-related biases, indicating room for further refinement in contextual bias modeling.
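The data-augmentation step of the framework can be illustrated with a minimal counterfactual-swap sketch: demographic terms in benchmark sentences are replaced by their counterparts to balance the fine-tuning data. The swap list and example sentence below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical counterfactual augmentation: swap gendered terms so that
# fine-tuning data covers both directions of each stereotype.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

def augment(sentence: str) -> str:
    """Return the sentence with each gendered term replaced by its counterpart."""
    tokens = sentence.split()
    return " ".join(SWAPS.get(t.lower(), t) for t in tokens)

original = "the doctor said he was busy"
print(augment(original))  # → "the doctor said she was busy"
```

In practice both the original and the augmented sentence would be kept in the fine-tuning set, so the model cannot rely on a single demographic keyword to predict the label.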
📝 Abstract
Large Language Models (LLMs) inherit explicit and implicit biases from their training datasets. Identifying and mitigating these biases is crucial to ensuring fair outputs, as biased models can perpetuate harmful stereotypes and misinformation. This study underscores the need to address biases in LLMs amid the rapid growth of generative AI. We used bias-specific benchmarks such as StereoSet and CrowS-Pairs to evaluate the presence of various biases in generative models such as BERT and GPT-3.5. We propose an automated Bias-Identification Framework that recognizes social biases in LLMs along dimensions such as gender, race, profession, and religion, adopting a two-pronged approach to detect explicit and implicit biases in text data. Results indicate that fine-tuned models struggle with gender biases but excel at identifying and avoiding racial biases. Despite some success, the LLMs often over-relied on keywords. To probe the analyzed LLMs' capacity for detecting implicit biases, we applied Bag-of-Words analysis, which revealed indications of implicit stereotyping within the vocabulary. To bolster model performance, we applied an enhancement strategy combining prompting techniques with data augmentation of the bias benchmarks. The fine-tuned models exhibited promising adaptability in cross-dataset testing and significantly improved performance on implicit bias benchmarks, with gains of up to 20%.
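The Bag-of-Words analysis mentioned above can be sketched as follows: comparing word counts between the stereotyping and less-stereotyping sides of benchmark sentence pairs surfaces the lexical cues (here, gendered pronouns) that a model may over-rely on. The sentence pairs are hypothetical stand-ins for CrowS-Pairs-style data, not examples from the paper.

```python
from collections import Counter
import re

# Hypothetical sentence pairs in the CrowS-Pairs style: each pair contrasts
# a stereotyping sentence with a counter-stereotyping counterpart.
pairs = [
    ("the nurse said she would help", "the nurse said he would help"),
    ("the engineer fixed his code", "the engineer fixed her code"),
]

def bag_of_words(text: str) -> Counter:
    """Lowercase, tokenize on word characters, and count occurrences."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Aggregate counts over the two sides of all pairs.
stereo, anti = Counter(), Counter()
for s, a in pairs:
    stereo += bag_of_words(s)
    anti += bag_of_words(a)

# Words whose counts differ between the two sides are candidate
# stereotype cues in the vocabulary.
cue_words = sorted(w for w in stereo | anti if stereo[w] != anti[w])
print(cue_words)  # → ['he', 'her', 'his', 'she']
```

Shared context words ("nurse", "engineer", "code") cancel out, so only the tokens that actually distinguish the stereotyping side survive; this is the kind of keyword dependency the abstract reports.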