🤖 AI Summary
This study addresses gender bias in low-resource Bengali, shaped by linguistic and cultural factors, moving beyond English-centric paradigms in bias detection. By integrating lexicon-based mining, classification models, cross-lingual translation comparisons, large language model generation, and fieldwork in rural and low-income communities, the research synthesizes community-driven, contextually grounded data with computational methods to uncover culturally specific manifestations of gender bias in Bengali. The findings underscore the necessity of context-sensitive, locally adapted frameworks for bias identification, offering a pathway toward equitable NLP systems for Bengali and other low-resource Indo-Aryan languages.
📝 Abstract
Large Language Models (LLMs) have achieved significant success in recent years; yet issues of intrinsic gender bias persist, especially in non-English languages. Although current research mostly emphasizes English, the linguistic and cultural biases inherent in Global South languages, such as Bengali, remain little examined. This research examines the characteristics and magnitude of gender bias in Bengali, evaluating the efficacy of current approaches for identifying and mitigating bias. We use several methods to extract gender-biased utterances, including lexicon-based mining, computational classification models, translation-based comparative analysis, and GPT-based bias generation. Our research indicates that the direct application of English-centric bias detection frameworks to Bengali is severely constrained by linguistic disparities and socio-cultural factors that shape implicit biases. To tackle these difficulties, we conducted two field investigations in rural and low-income communities, gathering authentic insights on gender bias. The findings demonstrate that gender bias in Bengali presents distinct characteristics relative to English, requiring a more localized and context-sensitive methodology. Additionally, our research emphasizes the need to integrate community-driven research approaches to identify culturally relevant biases often neglected by automated systems. Our work contributes to the ongoing discussion around gender bias in AI by illustrating the need for linguistic tools specifically designed for underrepresented languages. This study establishes a foundation for further investigations into bias reduction in Bengali and other Indic languages, promoting the development of more inclusive and fair NLP systems.