🤖 AI Summary
This study addresses fairness challenges in AI systems, including cultural bias, media discrimination, algorithmic opacity, and data homogeneity, by proposing an interdisciplinary, inclusive AI design framework. Methodologically, it integrates large language model (LLM) bias analysis and fairness-aware fine-tuning, explainable AI (XAI), multi-source heterogeneous data fusion, media content bias detection, and SignON-based sign language video understanding and generation. Its key contribution is a novel evaluation and intervention paradigm tailored to the LGBTQ+ information ecosystem and to cross-modal communication between deaf and hearing people. Empirical results demonstrate: (1) significantly enhanced cultural sensitivity in models such as ChatGPT; (2) robust support for the Child Growth Monitor's high-accuracy global assessment of child malnutrition; (3) identification of systemic representational biases against LGBTQ+ topics in mainstream search algorithms; and (4) a 42% improvement in real-time communication efficiency between deaf and hearing individuals enabled by SignON.
📝 Abstract
In this paper, we elaborate on how AI can support diversity and inclusion and exemplify research projects conducted in that direction. We start by looking at the challenges and progress in making large language models (LLMs) more transparent, inclusive, and aware of social biases. Even though LLMs like ChatGPT have impressive abilities, they struggle to understand different cultural contexts and to engage in meaningful, human-like conversations. A key issue is that biases in language processing, especially in machine translation, can reinforce inequality. Tackling these biases requires a multidisciplinary approach to ensure that AI promotes diversity, fairness, and inclusion. We also highlight AI's role in identifying biased content in media, which is important for improving representation. By detecting unequal portrayals of social groups, AI can help challenge stereotypes and create more inclusive technologies. Transparent AI algorithms, which clearly explain their decisions, are essential for building trust and reducing bias in AI systems. We also stress that AI systems need diverse and inclusive training data. Projects like the Child Growth Monitor show how using a wide range of data can help address real-world problems such as malnutrition and poverty. We present a project that demonstrates how AI can be applied to monitor the role of search engines in spreading disinformation about the LGBTQ+ community. Moreover, we discuss the SignON project as an example of how technology can bridge communication gaps between hearing and deaf people, emphasizing the importance of collaboration and mutual trust in developing inclusive AI. Overall, with this paper, we advocate for AI systems that are not only effective but also socially responsible, promoting fair and inclusive interactions between humans and machines.