Developing and Utilizing a Large-Scale Cantonese Dataset for Multi-Tasking in Large Language Models

📅 2025-03-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Cantonese, a low-resource language, suffers from a severe scarcity of high-quality textual data—stemming from Mandarin dominance, fragmented speaker communities, inconsistent encoding and input methods, pervasive code-mixing with English, and frequent code-switching. Method: We construct the first large-scale, multi-source heterogeneous Cantonese corpus comprising over 2 billion tokens, integrating open-source data, Hong Kong–based forums, Cantonese Wikipedia, and Common Crawl. We introduce novel preprocessing techniques: encoding-aware cleaning, content-sensitive deduplication, and robust code-switching filtering. Contribution/Results: Supervised fine-tuning on this corpus significantly enhances large language models’ Cantonese understanding and generation capabilities, achieving state-of-the-art performance across four Cantonese-specific benchmarks. Moreover, cross-lingual transfer yields consistent improvements on mainstream multilingual tasks—marking the first demonstration that low-resource Cantonese modeling can positively reinforce general multilingual competence.
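To make the code-switching filtering concrete, here is a minimal Python sketch of how a Cantonese-signal and English code-mixing filter might look. The Cantonese-specific character set and both thresholds are illustrative assumptions, not values taken from the paper.

```python
CANTONESE_CHARS = set("嘅咗喺咁嘢佢哋唔冇啲乜嗰嚟")  # illustrative Cantonese markers, not exhaustive

def cantonese_ratio(text: str) -> float:
    """Fraction of CJK characters that are Cantonese-specific markers."""
    cjk = [c for c in text if "\u4e00" <= c <= "\u9fff"]
    return sum(c in CANTONESE_CHARS for c in cjk) / len(cjk) if cjk else 0.0

def english_ratio(text: str) -> float:
    """Fraction of characters that are ASCII letters; a rough proxy for code-mixing."""
    return sum(c.isascii() and c.isalpha() for c in text) / max(len(text), 1)

def keep_document(text: str, min_canto: float = 0.02, max_english: float = 0.5) -> bool:
    """Keep documents with a visible Cantonese signal and limited English code-mixing.

    Thresholds are assumptions for illustration, not values from the paper.
    """
    return cantonese_ratio(text) >= min_canto and english_ratio(text) <= max_english

# Example: a Cantonese forum post passes, an almost-all-English post does not.
print(keep_document("佢哋今日唔使返工，好開心。"))          # True
print(keep_document("Let's meet at Central MTR at 3pm ok?"))  # False
```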

📝 Abstract
High-quality data resources play a crucial role in training large language models (LLMs), particularly for low-resource languages like Cantonese. Despite having more than 85 million native speakers, Cantonese is still considered a low-resource language in natural language processing (NLP) due to factors such as the dominance of Mandarin, a lack of cohesion within the Cantonese-speaking community, diversity in character encoding and input methods, and the tendency of overseas Cantonese speakers to prefer using English. In addition, the rich colloquial vocabulary of Cantonese, English loanwords, and code-switching characteristics add to the complexity of corpus collection and processing. To address these challenges, we collect Cantonese texts from a variety of sources, including open-source corpora, Hong Kong-specific forums, Wikipedia, and Common Crawl data. We conduct rigorous data processing through language filtering, quality filtering, content filtering, and de-duplication steps, successfully constructing a high-quality Cantonese corpus of over 2 billion tokens for training large language models. We further refine the model through supervised fine-tuning (SFT) on curated Cantonese tasks, enhancing its ability to handle specific applications. After training, the model achieves state-of-the-art (SOTA) performance on four Cantonese benchmarks, and it also exhibits improved performance on other mainstream language tasks.
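As a rough illustration of the de-duplication step described in the abstract, the sketch below hashes a whitespace-normalized copy of each document and keeps only the first occurrence. This handles exact and near-exact repeats; the paper's content-sensitive deduplication is likely more sophisticated (e.g., near-duplicate detection), so treat this purely as a simplified assumption.

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Strip all whitespace and lowercase ASCII so trivially different copies hash alike."""
    return re.sub(r"\s+", "", text).lower()

def deduplicate(docs):
    """Yield each document the first time its normalized content hash is seen."""
    seen = set()
    for doc in docs:
        key = hashlib.sha1(normalize(doc).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            yield doc

# Example: the reposted second item differs only by spacing and is dropped.
corpus = ["今日好攰呀。", "今日好攰呀 。", "聽日放假！"]
print(list(deduplicate(corpus)))
```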
Problem

Research questions and friction points this paper is trying to address.

Addressing low-resource challenges for Cantonese in NLP.
Building a high-quality Cantonese corpus for LLM training.
Enhancing model performance on Cantonese and mainstream tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collected diverse Cantonese texts to build the corpus
Applied rigorous data filtering and processing
Enhanced the model with supervised fine-tuning (see the sketch after this list)
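The following is a minimal supervised fine-tuning (SFT) sketch using Hugging Face Transformers. The base model name, the data file cantonese_sft.jsonl, and the hyperparameters are placeholders assumed for illustration, not the paper's actual configuration.

```python
# Minimal SFT sketch with Hugging Face Transformers; names and hyperparameters
# are placeholders, not the setup reported in the paper.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "Qwen/Qwen2-7B"  # assumed base model, purely illustrative
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Assumed JSONL file whose "text" field already contains formatted instruction-response pairs.
dataset = load_dataset("json", data_files="cantonese_sft.jsonl", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-cantonese",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```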
Authors

Jiyue Jiang
The Chinese University of Hong Kong
Alfred Kar Yin Truong
The University of Hong Kong
Yanyu Chen
The Chinese University of Hong Kong
Qinghang Bao
The University of Hong Kong
Drug Development
Sheng Wang
The University of Hong Kong
Pengan Chen
The University of Hong Kong
Jiuming Wang
The Chinese University of Hong Kong
Lingpeng Kong
Google DeepMind, The University of Hong Kong
Natural Language Processing, Machine Learning
Yu Li
The Chinese University of Hong Kong
Chuan Wu
Professor of Computer Science, The University of Hong Kong
cloud computing, distributed machine learning algorithms and systems