NativQA Framework: Enabling LLMs with Native, Local, and Everyday Knowledge

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from pervasive cultural biases, insufficient fairness, and poor adaptability to low-resource languages and region-specific contexts. To address these challenges, the authors propose the first automated framework for constructing localized knowledge resources, combining user-provided seed queries with web search engines. The method integrates controllable query generation, geography-aware web crawling, multilingual text cleaning, and structured question-answer (QA) extraction, covering native knowledge from 39 locations across 24 countries in seven languages, including extremely low-resource ones. The resulting multilingual localized QA dataset comprises over 300,000 high-quality QA pairs, curated for cultural alignment, linguistic diversity, and authenticity to real-world usage scenarios. The dataset is publicly released to support LLM localization evaluation and fine-tuning, yielding measurable improvements in model fairness and practical utility across diverse cultural and geographical settings.
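The four pipeline stages named above can be sketched as follows. This is an illustrative mock, not the framework's actual code (which lives at https://gitlab.com/nativqa/nativqa-framework): the function names, query templates, and the stand-in search function are all assumptions, and a real run would call a live search-engine API.

```python
import re

def generate_queries(seed, location):
    """Controllable query generation: expand a seed query into
    location-specific variants (templates are illustrative)."""
    return [f"{seed} in {location}",
            f"best {seed} {location}",
            f"{seed} near {location}"]

def mock_search(query):
    """Stand-in for geography-aware web crawling; a real pipeline
    would query a search engine scoped to the target region."""
    return [{"question": query + "?",
             "answer": f"  A locally sourced answer about {query}.  "}]

def clean_text(text):
    """Multilingual text cleaning: here just whitespace
    normalization, as a minimal example."""
    return re.sub(r"\s+", " ", text).strip()

def build_qa_pairs(seed, location):
    """Structured QA extraction: assemble cleaned (question, answer)
    pairs from the search results for every generated query."""
    pairs = []
    for query in generate_queries(seed, location):
        for hit in mock_search(query):
            pairs.append({"question": clean_text(hit["question"]),
                          "answer": clean_text(hit["answer"])})
    return pairs

pairs = build_qa_pairs("public transport", "Doha")
print(len(pairs))  # one QA pair per generated query variant
```

Scaling this sketch to the paper's 300K pairs would mean running many seed queries per topic category across each of the 39 locations, with language-appropriate templates per locale.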

📝 Abstract
The rapid advancement of large language models (LLMs) has raised concerns about cultural bias, fairness, and their applicability in diverse linguistic and underrepresented regional contexts. To enhance and benchmark the capabilities of LLMs, there is a need for large-scale resources focused on multilingual, local, and cultural contexts. In this study, we propose a framework, NativQA, that can seamlessly construct large-scale, culturally and regionally aligned QA datasets in native languages. The framework utilizes user-defined seed queries and leverages search engines to collect location-specific, everyday information. It has been evaluated across 39 locations in 24 countries and in 7 languages, ranging from extremely low-resource to high-resource languages, yielding over 300K question-answer (QA) pairs. The developed resources can be used for LLM benchmarking and further fine-tuning. The framework has been made publicly available for the community (https://gitlab.com/nativqa/nativqa-framework).
Problem

Research questions and friction points this paper is trying to address.

Addressing cultural bias and fairness in LLMs across diverse regions
Creating multilingual QA datasets for underrepresented local contexts
Enhancing LLM capabilities with location-specific everyday knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes user-defined seed queries
Leverages search engines for local data
Constructs culturally aligned QA datasets