🤖 AI Summary
This work investigates whether regionally specialized large language models (LLMs) genuinely possess indigenous cultural understanding, using India as a case study to assess the cultural alignment of Indic LLMs against global models along two dimensions: value systems and social practices. Method: four complementary frameworks (the Inglehart-Welzel Value Map, GlobalOpinionQA, CulturalBench, and NormAd) are combined into a cross-model, multi-dimensional empirical evaluation. Contribution/Results: contrary to expectations, no Indic regional model outperforms its global counterparts; an average American respondent is even a better proxy for Indian cultural values than any regional model. Crucially, regional fine-tuning degrades rather than enhances cultural competence and harms recall of existing factual knowledge. The scarcity of high-quality, untranslated, culturally grounded data is identified as the fundamental bottleneck. The study argues that cultural evaluation must be co-prioritized with multilingual benchmarking and introduces an open, reusable cultural assessment methodology.
📝 Abstract
Large language models (LLMs) are used around the world but exhibit Western cultural tendencies. To address this cultural misalignment, many countries have begun developing "regional" LLMs tailored to local communities. Yet it remains unclear whether these models merely speak the language of their users or also reflect their cultural values and practices. Using India as a case study, we evaluate five Indic and five global LLMs along two key dimensions: values (via the Inglehart-Welzel map and GlobalOpinionQA) and practices (via CulturalBench and NormAd). Across all four tasks, we find that Indic models do not align more closely with Indian cultural norms than global models. In fact, an average American person is a better proxy for Indian cultural values than any Indic model. Even prompting strategies fail to meaningfully improve alignment. Ablations show that regional fine-tuning does not enhance cultural competence and may in fact hurt it by impeding recall of existing knowledge. We trace this failure to the scarcity of high-quality, untranslated, and culturally grounded pretraining and fine-tuning data. Our study positions cultural evaluation as a first-class requirement alongside multilingual benchmarks and offers a reusable methodology for developers. We call for deeper investments in culturally representative data to build and evaluate truly sovereign LLMs.