🤖 AI Summary
Large language models exhibit limited instruction-following capability in low-resource languages such as Thai, and prevailing translation-based benchmarks neglect cultural adaptation and domain-specific expertise. Method: We construct WangchanThaiInstruct, a natively authored Thai instruction dataset covering four professional domains and seven task types, developed through a collaborative multi-stage quality-control process involving annotators, domain experts, and AI researchers. Contribution/Results: The dataset exposes systematic deficiencies of translated data on culturally sensitive and professional tasks and shows that natively authored supervision is critical: models fine-tuned on WangchanThaiInstruct significantly outperform translation-based baselines in both in-domain and cross-domain zero-shot evaluations, confirming that high-quality, locally grounded instruction data is essential for improving cultural and domain alignment in low-resource language models.
📝 Abstract
Large language models excel at instruction-following in English, but their performance in low-resource languages like Thai remains underexplored. Existing benchmarks often rely on translations, missing the cultural and domain-specific nuances needed for real-world use. We present WangchanThaiInstruct, a human-authored Thai dataset for evaluation and instruction tuning, covering four professional domains and seven task types. Created through a multi-stage quality control process with annotators, domain experts, and AI researchers, WangchanThaiInstruct supports two studies: (1) a zero-shot evaluation showing performance gaps on culturally and professionally specific tasks, and (2) an instruction tuning study with ablations isolating the effect of native supervision. Models fine-tuned on WangchanThaiInstruct outperform those fine-tuned on translated data in both in-domain and out-of-domain benchmarks. These findings underscore the need for culturally and professionally grounded instruction data to improve LLM alignment in low-resource, linguistically diverse settings.