🤖 AI Summary
Problem: Existing evaluation benchmarks do not systematically cover Hong Kong's bilingual (written Traditional Chinese, spoken Cantonese) and culturally specific context, leaving large language models largely untested in this setting.
Method: We introduce HKMMLU, the first Hong Kong–focused multi-task benchmark, comprising 26,698 multiple-choice questions across 66 domains and 90,550 Cantonese–Mandarin translation pairs. We propose a fine-grained, multidimensional evaluation framework integrating zero-shot and few-shot reasoning, machine translation quality assessment, and prompt attribution analysis.
Contribution/Results: Empirical evaluation reveals that even the best-performing state-of-the-art model, DeepSeek-V3, falls short of 75% accuracy on HKMMLU, substantially lower than typical results on MMLU or CMMLU, exposing deficits in Cantonese comprehension, semantic parsing of Traditional Chinese, and Hong Kong-specific local knowledge. HKMMLU establishes a novel paradigm and essential infrastructure for cross-lingual, culturally grounded LLM evaluation.
📝 Abstract
Multilingual understanding is crucial for the cross-cultural applicability of Large Language Models (LLMs). However, evaluation benchmarks designed for Hong Kong's unique linguistic landscape, which combines Traditional Chinese script with Cantonese as the spoken form, and for its cultural context remain underdeveloped. To address this gap, we introduce HKMMLU, a multi-task language understanding benchmark that evaluates linguistic competence and socio-cultural knowledge specific to Hong Kong. HKMMLU includes 26,698 multiple-choice questions across 66 subjects, organized into four categories: Science, Technology, Engineering, and Mathematics (STEM); Social Sciences; Humanities; and Other. To evaluate the multilingual understanding ability of LLMs, it additionally includes 90,550 Mandarin–Cantonese translation tasks. We conduct comprehensive experiments on GPT-4o, Claude 3.7 Sonnet, and 18 open-source LLMs of varying sizes on HKMMLU. The results show that the best-performing model, DeepSeek-V3, struggles to reach 75% accuracy, significantly lower than its accuracy on MMLU and CMMLU. This performance gap highlights the need to improve LLMs' capabilities in Hong Kong-specific language and knowledge domains. Furthermore, we investigate how question language, model size, prompting strategies, and question and reasoning token lengths affect model performance. We anticipate that HKMMLU will significantly advance the development of LLMs in multilingual and cross-cultural contexts, thereby enabling broader and more impactful applications.
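The accuracy metric used throughout the abstract is the standard fraction of multiple-choice questions answered correctly. The sketch below illustrates that computation; the item format, the letter-option layout, and the stub "model" are illustrative assumptions, not the paper's actual evaluation harness.

```python
# Minimal sketch of zero-shot multiple-choice accuracy scoring.
# The item schema and the stub model are assumptions for illustration;
# HKMMLU's real harness and prompt templates may differ.

def format_prompt(item: dict) -> str:
    """Render one multiple-choice item as a zero-shot prompt."""
    options = "\n".join(f"{letter}. {text}" for letter, text in item["choices"].items())
    return f"{item['question']}\n{options}\nAnswer with a single letter."

def accuracy(items: list[dict], answer_fn) -> float:
    """Fraction of items where the model's chosen letter matches the gold answer."""
    correct = sum(1 for it in items if answer_fn(format_prompt(it)) == it["answer"])
    return correct / len(items)

# Toy demonstration with a stub "model" that always answers "A".
sample = [
    {"question": "1 + 1 = ?", "choices": {"A": "2", "B": "3"}, "answer": "A"},
    {"question": "2 + 2 = ?", "choices": {"A": "5", "B": "4"}, "answer": "B"},
]
print(accuracy(sample, lambda prompt: "A"))  # 0.5
```

In practice, `answer_fn` would wrap a call to the LLM under test (with the question posed in Traditional Chinese or Cantonese) and parse the returned letter before comparison.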