DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India: Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large language models (LLMs) exhibit poor cultural consistency, particularly at the sub-regional level, due to uneven coverage and superficial adaptation, while cultural evaluation remains hindered by coarse-grained, noisy datasets. Method: The authors introduce DIWALI, a fine-grained, India-specific dataset of culture specific items (CSIs), covering 17 cultural facets across 36 sub-regions and comprising ~8k human-verified cultural concepts. They further propose a multidimensional evaluation framework integrating LLM-as-judge, multi-group human assessment, and quantitative analysis. Contribution/Results: Experiments reveal pronounced sub-regional biases and shallow, surface-level cultural adaptation in mainstream LLMs. The dataset, evaluation code, and model outputs are publicly released, providing a sub-regional benchmark for cultural alignment and inclusive AI.
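The LLM-as-judge component of the evaluation framework can be pictured with a short sketch. Everything below is illustrative, not the paper's published setup: the judge model, prompt wording, and 1-5 rubric are all assumptions.

```python
# Hedged sketch of an LLM-as-judge call for scoring a cultural adaptation.
# The model name, prompt text, and 1-5 scale are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_adaptation(source_text: str, adapted_text: str, target_region: str) -> str:
    """Ask a judge LLM to rate the cultural faithfulness of an adaptation (1-5)."""
    prompt = (
        f"You are judging a cultural text adaptation for {target_region}, India.\n"
        f"Source: {source_text}\n"
        f"Adaptation: {adapted_text}\n"
        "On a 1-5 scale, how culturally specific and faithful is the adaptation "
        "to the target sub-region? Reply with the number only."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # judge model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```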

📝 Abstract
Large language models (LLMs) are widely used in various tasks and applications. However, despite their wide capabilities, they are shown to lack cultural alignment (Ryan et al., 2024; AlKhamissi et al., 2024) and produce biased generations (Naous et al., 2024) due to a lack of cultural knowledge and competence. Evaluating LLMs for cultural awareness and alignment is particularly challenging due to the lack of proper evaluation metrics and the unavailability of culturally grounded datasets representing the vast complexity of cultures at the regional and sub-regional levels. Existing datasets for culture specific items (CSIs) focus primarily on concepts at the regional level and may contain false positives. To address this issue, we introduce a novel CSI dataset for Indian culture, belonging to 17 cultural facets. The dataset comprises ~8k cultural concepts from 36 sub-regions. To measure the cultural competence of LLMs on a cultural text adaptation task, we evaluate the adaptations using the CSIs created, LLM as Judge, and human evaluations from diverse socio-demographic regions. Furthermore, we perform quantitative analysis demonstrating selective sub-regional coverage and surface-level adaptations across all considered LLMs. Our dataset is available at https://huggingface.co/datasets/nlip/DIWALI, the project webpage at https://nlip-lab.github.io/nlip/publications/diwali/, and our codebase with model outputs at https://github.com/pramitsahoo/culture-evaluation.
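Since the abstract links the dataset on the Hugging Face Hub, a minimal loading sketch follows. The split name and column names ("concept", "sub_region") are assumptions for illustration; consult the dataset card for the actual schema before running.

```python
from collections import defaultdict
from datasets import load_dataset

# Load the DIWALI CSI dataset from the Hugging Face Hub. The dataset ID comes
# from the paper; the split and field names below are assumptions.
ds = load_dataset("nlip/DIWALI", split="train")

# Group concepts by sub-region to inspect coverage across the 36 sub-regions.
by_region = defaultdict(list)
for row in ds:
    by_region[row["sub_region"]].append(row["concept"])

for region, concepts in sorted(by_region.items()):
    print(f"{region}: {len(concepts)} CSIs")
```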
Problem

Research questions and friction points this paper is trying to address.

LLMs lack cultural alignment and produce biased generations due to limited cultural knowledge
Existing CSI datasets cover culture mainly at the regional level and may contain false positives
Proper metrics for evaluating Indian cultural competence, especially at the sub-regional level, are missing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created an Indian-culture CSI dataset with ~8k concepts from 36 sub-regions
Evaluated LLMs on a cultural text adaptation task using CSI-based matching, LLM-as-judge, and human evaluation (see the sketch below)
Assessed cultural competence across 17 cultural facets using socio-demographically diverse evaluations
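As referenced above, one simple way to quantify whether an adaptation actually surfaces target-culture concepts is to check CSI coverage. This is a minimal sketch, not the paper's metric: verbatim substring matching is a deliberate simplification of the CSI-based evaluation the paper performs alongside LLM-as-judge and human ratings, and the example CSIs are hypothetical.

```python
# Illustrative CSI-coverage check for an adapted text. NOT the paper's exact
# metric -- a simplified proxy using verbatim substring matching.
def csi_coverage(adapted_text: str, target_csis: list[str]) -> float:
    """Fraction of target-culture CSIs that appear verbatim in the adaptation."""
    text = adapted_text.lower()
    hits = sum(1 for csi in target_csis if csi.lower() in text)
    return hits / len(target_csis) if target_csis else 0.0

# Hypothetical example: check an adaptation against a few Punjabi CSIs.
print(csi_coverage("The family cooked sarson da saag for Lohri.",
                   ["sarson da saag", "lohri", "phulkari"]))  # -> 0.666...
```

A low coverage score on otherwise fluent output is one concrete way the "surface-level adaptation" the paper reports could show up in practice.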