FilBench: Can LLMs Understand and Generate Filipino?

📅 2025-08-05
🤖 AI Summary
Existing large language model (LLM) evaluations lack dedicated, standardized benchmarks for low-resource Philippine languages—particularly Filipino, Tagalog, and Cebuano—hindering rigorous assessment of linguistic competence and cultural grounding. Method: We introduce FilBench, the first multi-task evaluation benchmark specifically designed for Philippine languages, covering four task categories: cultural knowledge, classical NLP, reading comprehension, and generation. It is constructed from human-curated, linguistically validated data and used to systematically evaluate 27 state-of-the-art and Southeast Asia–specialized models, including GPT-4o and SEA-LION v3 70B. Contribution/Results: Evaluation reveals substantial performance gaps: the best model, GPT-4o, achieves only 72.23%, while the best Southeast Asia–specialized model, SEA-LION v3 70B, scores 61.07%, underscoring persistent deficiencies in local language understanding and generation. FilBench establishes a foundational, empirically grounded evaluation standard, addressing a critical regional gap and enabling targeted development and assessment of LLMs for under-resourced Philippine languages.

📝 Abstract
Despite the impressive performance of LLMs on English-based tasks, little is known about their capabilities in specific languages such as Filipino. In this work, we address this gap by introducing FilBench, a Filipino-centric benchmark designed to evaluate LLMs across a diverse set of tasks and capabilities in Filipino, Tagalog, and Cebuano. We carefully curate the tasks in FilBench to reflect the priorities and trends of NLP research in the Philippines, such as Cultural Knowledge, Classical NLP, Reading Comprehension, and Generation. By evaluating 27 state-of-the-art LLMs on FilBench, we find that several LLMs struggle with reading comprehension and translation. Our results indicate that FilBench is challenging, with the best model, GPT-4o, achieving only a score of 72.23%. Moreover, we also find that models trained specifically for Southeast Asian languages tend to underperform on FilBench, with the highest-performing model, SEA-LION v3 70B, achieving only a score of 61.07%. Our work demonstrates the value of curating language-specific LLM benchmarks to aid in driving progress on Filipino NLP and increasing the inclusion of Philippine languages in LLM development.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' understanding and generation of Filipino languages
Evaluating LLMs' performance on diverse Filipino NLP tasks
Identifying gaps in Southeast Asian language model capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

FilBench, the first multi-task benchmark for evaluating LLMs in Filipino, Tagalog, and Cebuano
Tasks spanning Cultural Knowledge, Classical NLP, Reading Comprehension, and Generation
Empirical evidence of gaps in both frontier and Southeast Asia–specialized LLMs
👥 Authors
Lester James V. Miranda (University of Cambridge): Natural Language Processing, Machine Learning
Elyanah Aco (Nara Institute of Science and Technology)
Conner Manuel (Together AI)
Jan Christian Blaise Cruz (MBZUAI, McGill University, Mila - Quebec AI Institute): Natural Language Processing, Translation, Multilinguality, Low-resource Languages, Code Switching
Joseph Marvin Imperial (SEACrowd, University of Bath, National University, Philippines)