How Well Do LLMs Handle Cantonese? Benchmarking Cantonese Capabilities of Large Language Models

📅 2024-08-29
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the long-standing underrepresentation of Cantonese in natural language processing (NLP) by introducing CantonEval, the first comprehensive evaluation benchmark for large language models (LLMs) designed specifically for Cantonese. It assesses four core capabilities: factual generation, mathematical reasoning, complex reasoning, and commonsense understanding. Methodologically, the authors propose a standardized Cantonese LLM evaluation framework built around a rigorously curated, high-difficulty, multi-dimensional test suite developed through human verification and collaboration with domain experts; all models are evaluated uniformly under zero-shot and few-shot prompting protocols. Experiments reveal substantial deficiencies in current state-of-the-art LLMs with respect to Cantonese dialectal grammar, region-specific commonsense knowledge, and numerical reasoning. CantonEval fills a critical gap in Cantonese NLP evaluation, providing a reproducible benchmark and concrete optimization directions for developing, training, and aligning open-source Cantonese LLMs.
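The zero-shot and few-shot protocols mentioned above can be sketched as prompt construction: zero-shot shows the model only the test question, while few-shot prepends solved exemplars. This is a minimal illustrative sketch; the prompt wording, task format, and `build_prompt` helper are assumptions, not taken from the paper.

```python
# Hypothetical sketch of zero-/few-shot prompt construction for a
# Cantonese QA-style evaluation item. The Cantonese prompt template
# ("問題" = question, "答案" = answer) is an illustrative assumption.

def build_prompt(question: str, exemplars=None) -> str:
    """Build a zero-shot (no exemplars) or few-shot evaluation prompt."""
    parts = []
    if exemplars:  # few-shot: prepend worked question/answer pairs
        for q, a in exemplars:
            parts.append(f"問題：{q}\n答案：{a}")
    parts.append(f"問題：{question}\n答案：")  # test item, answer left blank
    return "\n\n".join(parts)

# Zero-shot: the model sees only the test question.
zero_shot = build_prompt("一打雞蛋有幾多隻？")

# Few-shot: the same question preceded by a solved exemplar.
few_shot = build_prompt(
    "一打雞蛋有幾多隻？",
    exemplars=[("三加五等於幾多？", "八")],
)
```

Under such a protocol, every model receives the same prompts, so score differences reflect model capability rather than prompt engineering.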

๐Ÿ“ Abstract
The rapid evolution of large language models (LLMs) has transformed the competitive landscape in natural language processing (NLP), particularly for English and other data-rich languages. However, underrepresented languages like Cantonese, spoken by over 85 million people, face significant development gaps. This is particularly concerning given the economic significance of the Guangdong-Hong Kong-Macau Greater Bay Area and the substantial Cantonese-speaking populations in places like Singapore and North America. Despite its wide use, Cantonese has scant representation in NLP research, especially compared to other languages from similarly developed regions. To bridge these gaps, we outline current Cantonese NLP methods and introduce new benchmarks designed to evaluate LLM performance in factual generation, mathematical logic, complex reasoning, and general knowledge in Cantonese, which aim to advance open-source Cantonese LLM technology. We also propose future research directions and recommend models to enhance Cantonese LLM development.
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLM performance in Cantonese
Address Cantonese NLP research gaps
Advance open-source Cantonese LLM technology
Innovation

Methods, ideas, or system contributions that make the work stand out.

CantonEval, a benchmark covering factual generation, mathematical reasoning, complex reasoning, and commonsense understanding
A standardized, human-verified Cantonese LLM evaluation framework with zero-shot and few-shot protocols
Analysis of Cantonese-specific weaknesses in state-of-the-art LLMs to guide open-source development