WangchanThaiInstruct: An instruction-following Dataset for Culture-Aware, Multitask, and Multi-domain Evaluation in Thai

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models exhibit limited instruction-following capabilities in low-resource languages such as Thai, and prevailing translation-based benchmarks neglect cultural adaptation and domain-specific expertise. Method: We construct the first natively authored Thai instruction dataset, covering four professional domains (e.g., education, healthcare) and seven task types, developed through a collaborative multi-stage quality-control process involving annotators, domain experts, and AI researchers. Contribution/Results: The dataset exposes systematic deficiencies of translated data on culturally sensitive and professional tasks, and models fine-tuned on it significantly outperform translation-based baselines in both in-domain and cross-domain zero-shot evaluations. This confirms that high-quality, locally grounded instruction data is essential for improving cultural and domain alignment in low-resource language models.

📝 Abstract
Large language models excel at instruction-following in English, but their performance in low-resource languages like Thai remains underexplored. Existing benchmarks often rely on translations, missing cultural and domain-specific nuances needed for real-world use. We present WangchanThaiInstruct, a human-authored Thai dataset for evaluation and instruction tuning, covering four professional domains and seven task types. Created through a multi-stage quality control process with annotators, domain experts, and AI researchers, WangchanThaiInstruct supports two studies: (1) a zero-shot evaluation showing performance gaps on culturally and professionally specific tasks, and (2) an instruction tuning study with ablations isolating the effect of native supervision. Models fine-tuned on WangchanThaiInstruct outperform those using translated data in both in-domain and out-of-domain benchmarks. These findings underscore the need for culturally and professionally grounded instruction data to improve LLM alignment in low-resource, linguistically diverse settings.
Problem

Research questions and friction points this paper is trying to address.

Evaluating instruction-following performance in Thai language models
Addressing cultural and domain-specific nuances missing in translated benchmarks
Improving LLM alignment with native Thai instruction data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-authored Thai dataset for instruction tuning
Multi-stage quality control with expert annotators
Native supervision that outperforms translated instruction data in both in-domain and out-of-domain benchmarks