KITE: A Benchmark for Evaluating Korean Instruction-Following Abilities in Large Language Models

📅 2025-10-17
🤖 AI Summary
Existing LLM instruction-following benchmarks exhibit strong English bias and lack open, culturally grounded evaluation frameworks for Korean—particularly regarding its complex morphology, honorific system, syntactic features, and sociolinguistic conventions. Method: We introduce KITE, the first comprehensive, open Korean instruction-following benchmark, covering both general-purpose tasks and Korean-specific challenges (e.g., honorific adaptation, numeral system switching, context-sensitive politeness). KITE comprises a diverse instruction set spanning syntactic, morphological, and sociolinguistic dimensions, evaluated via a reproducible pipeline integrating automated metrics and human assessment. Contribution/Results: Experiments reveal substantial deficiencies in mainstream multilingual LLMs on Korean instruction understanding—especially in culturally nuanced tasks. All components—including the dataset, evaluation code, and analysis—are publicly released, establishing critical infrastructure for culturally inclusive, multilingual LLM evaluation and development.

📝 Abstract
The instruction-following capabilities of large language models (LLMs) are pivotal for numerous applications, from conversational agents to complex reasoning systems. However, current evaluations predominantly focus on English models, neglecting the linguistic and cultural nuances of other languages. Specifically, Korean, with its distinct syntax, rich morphological features, honorific system, and dual numbering systems, lacks a dedicated benchmark for assessing open-ended instruction-following capabilities. To address this gap, we introduce the Korean Instruction-following Task Evaluation (KITE), a comprehensive benchmark designed to evaluate both general and Korean-specific instructions. Unlike existing Korean benchmarks that focus mainly on factual knowledge or multiple-choice testing, KITE directly targets diverse, open-ended instruction-following tasks. Our evaluation pipeline combines automated metrics with human assessments, revealing performance disparities across models and providing deeper insights into their strengths and weaknesses. By publicly releasing the KITE dataset and code, we aim to foster further research on culturally and linguistically inclusive LLM development and inspire similar endeavors for other underrepresented languages.
Problem

Research questions and friction points this paper addresses.

Evaluating Korean instruction-following abilities in large language models
Addressing the lack of Korean-specific benchmarks for open-ended instructions
Assessing linguistic and cultural nuances in Korean language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed Korean instruction-following benchmark KITE
Combined automated metrics with human assessments
Released public dataset and code for research
👥 Authors
Dongjun Kim (Stanford University; Machine Learning, Artificial Intelligence)
Chanhee Park (Department of Computer Science and Engineering, Korea University)
Chanjun Park (Assistant Professor at Soongsil University; Natural Language Processing, Large Language Models, Machine Translation)
Heuiseok Lim (Department of Computer Science and Engineering, Korea University)