AI Summary
Large language models (LLMs) exhibit poor performance on Linguistics Olympiad (LO) puzzles in low-resource languages, primarily due to weak cross-lingual reasoning and insufficient handling of morphological complexity (e.g., inflection, agglutination).
Method: We propose a linguistics-informed, fine-grained feature annotation framework to systematically diagnose model bottlenecks, and design a morphology-aware tokenization preprocessing method that explicitly segments complex words.
Contribution/Results: Evaluated on a multilingual LO benchmark comprising 629 problems across 41 low-resource languages, our preprocessing significantly improves LLMs' problem-solving accuracy. Crucially, linguistic features common in English (e.g., analyticity, fixed word order) demonstrate positive cross-lingual transfer. This work is the first to integrate interpretable, linguistically grounded feature analysis with task-specific structural preprocessing for LO, offering a novel pathway toward enhancing LLM robustness in complex reasoning over low-resource languages.
Abstract
Large language models (LLMs) have demonstrated potential in reasoning tasks, but their performance on linguistics puzzles remains consistently poor. These puzzles, often derived from Linguistics Olympiad (LO) contests, provide a minimal-contamination environment for assessing LLMs' linguistic reasoning abilities across low-resource languages. This work analyses LLMs' performance on 629 problems across 41 low-resource languages, labelling each with linguistically informed features to unveil weaknesses. Our analyses show that LLMs struggle with puzzles involving higher morphological complexity and perform better on puzzles involving linguistic features that are also found in English. We also show that splitting words into morphemes as a pre-processing step improves solvability, indicating a need for more informed and language-specific tokenisers. These findings thus offer insights into some of the challenges in linguistic reasoning and modelling of low-resource languages.
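The morpheme-splitting pre-processing described above can be illustrated with a minimal sketch. This is not the paper's implementation: the morpheme inventory, the greedy longest-match strategy, and all example words below are invented purely for illustration, using English for readability.

```python
# Hypothetical sketch of morpheme-segmentation pre-processing:
# given a small hand-written morpheme inventory for a language,
# greedily split each word into known morphemes before passing
# the puzzle text to an LLM. Inventory and words are illustrative.

MORPHEMES = {"un", "break", "able", "walk", "ed", "ing", "s"}

def segment(word: str, morphemes=MORPHEMES) -> str:
    """Split `word` left to right into the longest known morphemes;
    any unrecognised remainder is kept as a single chunk."""
    parts, i = [], 0
    while i < len(word):
        # try the longest possible match starting at position i
        for j in range(len(word), i, -1):
            if word[i:j] in morphemes:
                parts.append(word[i:j])
                i = j
                break
        else:
            # no known morpheme starts here: keep the rest as-is
            parts.append(word[i:])
            break
    return "-".join(parts)

print(segment("unbreakable"))  # un-break-able
print(segment("walked"))       # walk-ed
```

A real system would need per-language inventories (or an unsupervised segmenter), since greedy longest-match fails on ambiguous or fusional morphology; the point is only that exposing morpheme boundaries in the input, rather than relying on a subword tokeniser, is the kind of structural pre-processing the abstract refers to.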