🤖 AI Summary
Existing LLM instruction-following benchmarks exhibit a strong English bias, and Korean lacks an open, culturally grounded evaluation framework that covers its complex morphology, honorific system, distinctive syntax, and sociolinguistic conventions. Method: We introduce KITE, the first comprehensive, open Korean instruction-following benchmark, covering both general-purpose tasks and Korean-specific challenges (e.g., honorific adaptation, numeral-system switching, context-sensitive politeness). KITE comprises a diverse instruction set spanning syntactic, morphological, and sociolinguistic dimensions, evaluated via a reproducible pipeline that integrates automated metrics with human assessment. Contribution/Results: Experiments reveal substantial deficiencies in mainstream multilingual LLMs on Korean instruction following, especially on culturally nuanced tasks. All components (dataset, evaluation code, and analysis) are publicly released, establishing infrastructure for culturally inclusive, multilingual LLM evaluation and development.
📝 Abstract
The instruction-following capabilities of large language models (LLMs) are pivotal for numerous applications, from conversational agents to complex reasoning systems. However, current evaluations predominantly focus on English, neglecting the linguistic and cultural nuances of other languages. In particular, Korean, with its distinct syntax, rich morphology, honorific system, and dual numeral systems (native Korean and Sino-Korean), lacks a dedicated benchmark for assessing open-ended instruction-following capabilities. To address this gap, we introduce the Korean Instruction-following Task Evaluation (KITE), a comprehensive benchmark designed to evaluate both general and Korean-specific instructions. Unlike existing Korean benchmarks, which focus mainly on factual knowledge or multiple-choice testing, KITE directly targets diverse, open-ended instruction-following tasks. Our evaluation pipeline combines automated metrics with human assessments, revealing performance disparities across models and yielding deeper insights into their strengths and weaknesses. By publicly releasing the KITE dataset and code, we aim to foster research on culturally and linguistically inclusive LLM development and to inspire similar efforts for other underrepresented languages.
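To make "automated metrics" concrete, the sketch below shows the kind of rule-based compliance check an evaluation pipeline like this might apply to one Korean-specific instruction type (honorific adaptation). This is a hypothetical illustration under the assumption of simple suffix matching; the endings list and the `uses_polite_register` function are not KITE's released code.

```python
import re

# Common polite sentence endings: "니다"/"니까" cover the formal 합쇼체
# (e.g., 합니다, 습니다, 습니까); the rest cover the polite 해요체.
# Illustrative list only, not an exhaustive morphological analysis.
POLITE_ENDINGS = ("니다", "니까", "세요", "어요", "아요", "지요", "에요", "예요")

def uses_polite_register(response: str) -> bool:
    """Return True if every sentence ends in a recognized polite ending.

    Sentences are split on common sentence-final punctuation; an empty
    response or any sentence lacking a polite ending fails the check.
    """
    sentences = [s.strip() for s in re.split(r"[.!?\n]+", response) if s.strip()]
    return bool(sentences) and all(s.endswith(POLITE_ENDINGS) for s in sentences)

# A response to an "answer formally" instruction passes; casual speech fails.
assert uses_polite_register("안녕하세요. 오늘 날씨가 좋습니다.")
assert not uses_polite_register("안녕. 오늘 날씨 좋아.")
```

Suffix matching alone misses irregular conjugations and mixed registers, which is why a robust pipeline would pair such programmatic checks with morphology-aware analysis and human assessment; the point here is only that some open-ended instruction types can still be scored automatically.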