🤖 AI Summary
This study addresses annotation bias in multilingual large language models (LLMs), arising from task design flaws, annotator subjectivity, and cultural misalignment. Method: We propose the first multilingual annotation bias taxonomy integrating instruction, annotator, and cultural dimensions, and develop a novel integrated mitigation framework combining inter-annotator agreement analysis, model disagreement detection, meta-analysis, cross-lingual model discrepancy modeling, and cultural reasoning to establish a quantifiable multilingual bias detection metric system. Contribution/Results: (1) We systematically uncover the generative mechanisms of annotation bias under cultural diversity; (2) we provide a scalable, culturally sensitive paradigm for bias identification and mitigation; and (3) we ethically reconfigure multilingual data annotation workflows. Our approach significantly enhances model fairness and cross-cultural robustness, offering both theoretical foundations and practical pathways for responsible multilingual AI development.
📝 Abstract
Annotation bias in NLP datasets remains a major challenge for developing multilingual Large Language Models (LLMs), particularly in culturally diverse settings. Bias from task framing, annotator subjectivity, and cultural mismatches can distort model outputs and exacerbate social harms. We propose a comprehensive framework for understanding annotation bias, distinguishing among instruction bias, annotator bias, and contextual and cultural bias. We review detection methods (including inter-annotator agreement, model disagreement, and metadata analysis) and highlight emerging techniques such as multilingual model divergence and cultural inference. We further outline proactive and reactive mitigation strategies, including diverse annotator recruitment, iterative guideline refinement, and post-hoc model adjustments. Our contributions include: (1) a typology of annotation bias; (2) a synthesis of detection metrics; (3) an ensemble-based bias mitigation approach adapted for multilingual settings, and (4) an ethical analysis of annotation processes. Together, these insights aim to inform more equitable and culturally grounded annotation pipelines for LLMs.