Measuring Hong Kong Massive Multi-Task Language Understanding

📅 2025-05-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large language models lack systematic evaluation benchmarks tailored to Hong Kong's bilingual context (Traditional Chinese as the written language, Cantonese as the spoken language) and its culturally specific knowledge. Method: The authors introduce HKMMLU, a Hong Kong-focused multi-task benchmark comprising 26,698 multiple-choice questions across 66 subjects plus 90,550 Mandarin-Cantonese translation tasks, and evaluate GPT-4o, Claude 3.7 Sonnet, and 18 open-source LLMs under zero-shot and few-shot prompting, analyzing how question language, model size, prompting strategy, and question and reasoning token lengths affect performance. Contribution/Results: Even the best-performing model, DeepSeek-V3, falls short of 75% accuracy on HKMMLU, substantially below typical scores on MMLU and CMMLU, exposing deficits in Cantonese comprehension, Traditional Chinese understanding, and Hong Kong-specific local knowledge. HKMMLU thus provides essential infrastructure for multilingual, culturally grounded LLM evaluation.
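To make the multiple-choice evaluation protocol concrete, below is a minimal zero-shot scoring sketch in Python. The prompt template, the `question`/`choices`/`answer` field names, and the `query_model` callable are illustrative assumptions, not the paper's released harness.

```python
import re

def format_prompt(item: dict) -> str:
    """Render one multiple-choice question as a zero-shot prompt."""
    letters = "ABCD"
    lines = [f"Question: {item['question']}"]
    lines += [f"{letters[i]}. {choice}" for i, choice in enumerate(item["choices"])]
    lines.append("Answer with a single letter (A, B, C, or D).")
    return "\n".join(lines)

def extract_choice(response: str) -> str | None:
    """Pull the first standalone option letter out of a model response."""
    match = re.search(r"\b([ABCD])\b", response)
    return match.group(1) if match else None

def accuracy(dataset: list[dict], query_model) -> float:
    """Fraction of questions whose predicted letter matches the gold letter."""
    correct = 0
    for item in dataset:
        prediction = extract_choice(query_model(format_prompt(item)))
        correct += prediction == item["answer"]  # gold answer stored as "A".."D"
    return correct / len(dataset)
```

Answer extraction by regex is a deliberate simplification; real harnesses often compare option log-probabilities instead of parsing free-form text.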

📝 Abstract
Multilingual understanding is crucial for the cross-cultural applicability of Large Language Models (LLMs). However, evaluation benchmarks designed for Hong Kong's unique linguistic landscape, which combines Traditional Chinese script with Cantonese as the spoken form, and for its cultural context remain underdeveloped. To address this gap, we introduce HKMMLU, a multi-task language understanding benchmark that evaluates Hong Kong linguistic competence and socio-cultural knowledge. HKMMLU includes 26,698 multiple-choice questions across 66 subjects, organized into four categories: Science, Technology, Engineering, and Mathematics (STEM); Social Sciences; Humanities; and Other. To evaluate the multilingual understanding ability of LLMs, 90,550 Mandarin-Cantonese translation tasks are additionally included. We conduct comprehensive experiments on GPT-4o, Claude 3.7 Sonnet, and 18 open-source LLMs of varying sizes on HKMMLU. The results show that even the best-performing model, DeepSeek-V3, struggles to reach an accuracy of 75%, significantly below its performance on MMLU and CMMLU. This performance gap highlights the need to improve LLMs' capabilities in Hong Kong-specific language and knowledge domains. Furthermore, we investigate how question language, model size, prompting strategies, and question and reasoning token lengths affect model performance. We anticipate that HKMMLU will significantly advance the development of LLMs in multilingual and cross-cultural contexts, enabling broader and more impactful applications.
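Since the abstract compares prompting strategies, one common way to realize the few-shot setting is to prepend solved exemplars from a development split. The sketch below reuses the hypothetical item fields from the zero-shot example above and is an assumption, not HKMMLU's official prompt format.

```python
def format_example(item: dict, with_answer: bool = True) -> str:
    """Render an item, optionally revealing the gold letter (for exemplars)."""
    letters = "ABCD"
    lines = [f"Question: {item['question']}"]
    lines += [f"{letters[i]}. {choice}" for i, choice in enumerate(item["choices"])]
    lines.append(f"Answer: {item['answer']}" if with_answer else "Answer:")
    return "\n".join(lines)

def few_shot_prompt(dev_items: list[dict], test_item: dict, k: int = 5) -> str:
    """Prepend k solved dev-split exemplars to the unanswered test question."""
    shots = [format_example(x) for x in dev_items[:k]]
    return "\n\n".join(shots + [format_example(test_item, with_answer=False)])
```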
Problem

Research questions and friction points this paper is trying to address.

Lack of evaluation benchmarks for Hong Kong's unique linguistic and cultural context
Need to assess multilingual understanding in Traditional Chinese and Cantonese
Performance gap in LLMs for Hong Kong-specific language and knowledge domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces HKMMLU benchmark for Hong Kong linguistic evaluation
Includes 26,698 multiple-choice questions across 66 subjects
Evaluates multilingual ability via 90,550 Mandarin-Cantonese translation tasks (a scoring sketch follows this list)
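For the translation tasks, one reasonable baseline metric is corpus BLEU with sacrebleu's Chinese tokenizer; the paper's exact metric configuration is not stated here, so treat this as a hedged sketch rather than the authors' setup.

```python
from sacrebleu.metrics import BLEU

def corpus_bleu_zh(hypotheses: list[str], references: list[str]) -> float:
    """Score system outputs against one reference per segment.

    tokenize="zh" applies sacrebleu's character-level Chinese tokenizer,
    the usual choice for Chinese-script targets, Traditional Chinese included.
    """
    bleu = BLEU(tokenize="zh")
    return bleu.corpus_score(hypotheses, [references]).score
```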
Chuxue Cao, Hong Kong University of Science and Technology
Zhenghao Zhu, Hong Kong University of Science and Technology
Junqi Zhu, Hong Kong University of Science and Technology
Guoying Lu, Hong Kong University of Science and Technology
Siyu Peng, Hong Kong University of Science and Technology
Juntao Dai, Peking University
Weijie Shi, Hong Kong University of Science and Technology
Sirui Han, Hong Kong University of Science and Technology (Large Language Model; Interdisciplinary Artificial Intelligence)
Yike Guo, Hong Kong University of Science and Technology