Unveiling: What Makes Linguistics Olympiad Puzzles Tricky for LLMs?

πŸ“… 2025-08-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Large language models (LLMs) exhibit poor performance on Linguistics Olympiad (LO) puzzles in low-resource languages, primarily due to weak cross-lingual reasoning and insufficient handling of morphological complexity (e.g., inflection, agglutination). Method: We propose a linguistics-informed, fine-grained feature annotation framework to systematically diagnose model bottlenecks, and design a morphology-aware preprocessing method that explicitly segments complex words into morphemes. Contribution/Results: Evaluated on a multilingual LO benchmark comprising 629 problems across 41 low-resource languages, our preprocessing significantly improves LLMs' problem-solving accuracy. Crucially, puzzles whose languages share features with English (e.g., analyticity, fixed word order) are solved more reliably, suggesting positive cross-lingual transfer. This work is the first to integrate interpretable, linguistically grounded feature analysis with task-specific structural preprocessing for LO, offering a novel pathway toward enhancing LLM robustness in complex reasoning over low-resource languages.
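The morphology-aware preprocessing described above can be sketched as a simple word-to-morpheme rewriting pass over a puzzle's data lines. This is a minimal illustration, not the paper's actual implementation: the segmentation lexicon, the example Turkish-like words, and the hyphen separator are all assumptions made for the sake of a runnable example.

```python
# Hypothetical sketch of morpheme-splitting preprocessing for LO puzzles.
# The lexicon below is a toy stand-in; a real pipeline would use a
# language-specific morphological segmenter.

MORPHEME_LEXICON = {
    # toy agglutinative examples (Turkish-like): word -> ordered morphemes
    "evlerimde": ["ev", "ler", "im", "de"],  # house-PL-1SG.POSS-LOC
    "evler": ["ev", "ler"],                  # house-PL
}

def segment_word(word: str, sep: str = "-") -> str:
    """Rewrite a word as its separator-joined morphemes when the lexicon
    covers it; otherwise leave the word untouched."""
    morphemes = MORPHEME_LEXICON.get(word)
    return sep.join(morphemes) if morphemes else word

def preprocess_puzzle(text: str) -> str:
    """Segment every whitespace-delimited token in a puzzle's data."""
    return " ".join(segment_word(tok) for tok in text.split())

print(preprocess_puzzle("evlerimde evler kitap"))
# -> ev-ler-im-de ev-ler kitap
```

Exposing morpheme boundaries in the prompt spares the model from having to discover them through a subword tokeniser that was never trained on the language, which is one plausible reading of why this step improves solvability.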

πŸ“ Abstract
Large language models (LLMs) have demonstrated potential in reasoning tasks, but their performance on linguistics puzzles remains consistently poor. These puzzles, often derived from Linguistics Olympiad (LO) contests, provide a minimal contamination environment to assess LLMs' linguistic reasoning abilities across low-resource languages. This work analyses LLMs' performance on 629 problems across 41 low-resource languages by labelling each with linguistically informed features to unveil weaknesses. Our analyses show that LLMs struggle with puzzles involving higher morphological complexity and perform better on puzzles involving linguistic features that are also found in English. We also show that splitting words into morphemes as a pre-processing step improves solvability, indicating a need for more informed and language-specific tokenisers. These findings thus offer insights into some challenges in linguistic reasoning and modelling of low-resource languages.
Problem

Research questions and friction points this paper is trying to address.

- Analyzing LLMs' poor performance on Linguistics Olympiad puzzles
- Identifying weaknesses in linguistic reasoning across low-resource languages
- Investigating morphological complexity and language-specific tokenization challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Labelling puzzles with linguistically informed features
- Splitting words into morphemes as a preprocessing step
- Demonstrating the need for more informed, language-specific tokenisers
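The feature-labelling idea above can be illustrated with a small annotation record per puzzle, plus a check for overlap with English features (which the paper links to better solvability). The feature names, the example language, and the overlap helper are all illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical sketch of per-puzzle linguistic feature annotation.
from dataclasses import dataclass, field

@dataclass
class PuzzleAnnotation:
    language: str
    features: dict = field(default_factory=dict)  # feature name -> present?

    def shares_with_english(self, english_features: set) -> set:
        """Return the puzzle language's features that are also found in
        English; the paper finds such overlap correlates with accuracy."""
        return {f for f, present in self.features.items()
                if present and f in english_features}

# Illustrative feature set for English (assumed, not from the paper).
ENGLISH_FEATURES = {"fixed_word_order", "analytic_morphology"}

puzzle = PuzzleAnnotation(
    language="ExampleLang",  # hypothetical puzzle language
    features={"agglutination": True,
              "fixed_word_order": True,
              "analytic_morphology": False},
)
print(sorted(puzzle.shares_with_english(ENGLISH_FEATURES)))
# -> ['fixed_word_order']
```

An interpretable annotation like this lets accuracy be broken down feature by feature, which is what makes the diagnosis of model bottlenecks systematic rather than anecdotal.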