TLUE: A Tibetan Language Understanding Evaluation Benchmark

📅 2025-03-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the lack of rigorous evaluation of large language models' (LLMs) comprehension capabilities in low-resource languages, specifically Tibetan. To this end, the authors introduce TLUE, the first large-scale, multi-task, multi-dimensional Tibetan Language Understanding Evaluation benchmark. TLUE comprises 67 comprehension subtasks across five domains plus seven safety-related subtasks, constructed through rigorous human verification in collaboration with domain experts, making it the first comprehensive, safety-aware, multi-task evaluation framework for Tibetan. Empirical evaluation shows that mainstream LLMs perform significantly worse than a random baseline on Tibetan tasks, underscoring their severe limitations in low-resource language understanding. TLUE is publicly released to serve as foundational infrastructure and a standardized evaluation paradigm for Tibetan AI research.

📝 Abstract
Large language models (LLMs) have made tremendous progress in recent years, but low-resource languages, such as Tibetan, remain significantly underrepresented in their evaluation. Despite Tibetan being spoken by over seven million people, it has largely been neglected in the development and assessment of LLMs. To address this gap, we present TLUE (A Tibetan Language Understanding Evaluation Benchmark), the first large-scale benchmark for assessing LLMs' capabilities in Tibetan. TLUE comprises two major components: (1) a comprehensive multi-task understanding benchmark spanning 5 domains and 67 subdomains, and (2) a safety benchmark covering 7 subdomains. We evaluate a diverse set of state-of-the-art LLMs. Experimental results demonstrate that most LLMs perform below the random baseline, highlighting the considerable challenges LLMs face in processing Tibetan, a low-resource language. TLUE provides an essential foundation for driving future research and progress in Tibetan language understanding and underscores the need for greater inclusivity in LLM development.
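The headline finding, that most models score below the random baseline, is easiest to see with a small accuracy check. The sketch below is purely illustrative (not the paper's actual evaluation harness), assuming a four-option multiple-choice format; the example answers are hypothetical:

```python
def random_baseline(num_choices: int) -> float:
    """Expected accuracy of uniform random guessing over num_choices options."""
    return 1.0 / num_choices

def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of predictions that exactly match the gold answers."""
    assert len(predictions) == len(gold)
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Hypothetical gold labels and model answers on 4-choice questions.
gold = ["A", "C", "B", "D", "A", "B", "C", "D"]
preds = ["A", "B", "B", "A", "C", "B", "D", "D"]

acc = accuracy(preds, gold)
baseline = random_baseline(4)
print(f"accuracy={acc:.3f}, random baseline={baseline:.3f}")
print("below random" if acc < baseline else "at or above random")
```

A model scoring below 0.25 on four-choice questions is doing worse than guessing, which is the sense in which the paper says most LLMs fall below the random baseline on Tibetan.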
Problem

Research questions and friction points this paper is trying to address.

Addresses underrepresentation of Tibetan in LLM evaluation.
Introduces TLUE, a benchmark for Tibetan language understanding.
Highlights poor LLM performance on Tibetan, a low-resource language.
Innovation

Methods, ideas, or system contributions that make the work stand out.

TLUE benchmark for Tibetan language evaluation.
Multi-task and safety benchmarks for LLMs.
Highlights LLM challenges with low-resource languages.