Evaluating Large Language Models along Dimensions of Language Variation: A Systematik Invesdigatiom uv Cross-lingual Generalization

📅 2024-06-19
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 1
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit significant performance degradation on unseen dialects and closely-related languages (CRLs), yet the underlying causes and mechanisms remain poorly understood—compounded by scarce CRL test data and training corpora contaminated with closely related languages. Method: We propose a phonological-morphological-lexical distance framework grounded in Bayesian noise modeling, formalizing language variation as an interpretable, controllable generative process. Using synthetic artificial languages, we construct a controlled linguistic divergence hierarchy to disentangle the independent and interactive effects of multidimensional language distances on cross-lingual generalization. Results: The synthetic language degradation patterns align closely with real-world CRL–high-resource language (HRL) performance gaps. Our method enables predictive, annotation-free diagnostic evaluation of LLMs on CRLs, substantially reducing assessment cost. Furthermore, it reveals nuanced moderating roles of task type and HRL characteristics in performance degradation.
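The Bayesian framing above can be made concrete with a toy conjugate update: treat a noise rate θ as Beta-distributed and update it from counts of "changed" units in aligned CRL–HRL data. This is a minimal hypothetical sketch, not the paper's actual estimator; the function names, the uniform prior, and the example counts are all illustrative assumptions.

```python
def beta_posterior(alpha0, beta0, k, n):
    # Conjugate Beta-Binomial update: with a Beta(alpha0, beta0) prior
    # on a noise rate theta, observing k "changed" units out of n
    # yields a Beta(alpha0 + k, beta0 + n - k) posterior.
    return alpha0 + k, beta0 + n - k

def posterior_mean(alpha, beta):
    # Mean of a Beta(alpha, beta) distribution.
    return alpha / (alpha + beta)

# Hypothetical example: 30 of 100 aligned word pairs differ between a
# CRL and its HRL neighbour, under a uniform Beta(1, 1) prior.
a, b = beta_posterior(1.0, 1.0, 30, 100)
print(round(posterior_mean(a, b), 3))  # → 0.304
```

The posterior mean (31/102 ≈ 0.304) then serves as a plug-in estimate of the noise parameter for that language pair, which is the role the posteriors play in the framework's cost-free performance prediction.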

📝 Abstract
While large language models exhibit certain cross-lingual generalization capabilities, they suffer from performance degradation (PD) on unseen closely-related languages (CRLs) and dialects relative to their high-resource language neighbour (HRLN). However, we currently lack a fundamental understanding of what kinds of linguistic distances contribute to PD, and to what extent. Furthermore, studies of cross-lingual generalization are confounded by unknown quantities of CRL language traces in the training data, and by the frequent lack of availability of evaluation data in lower-resource related languages and dialects. To address these issues, we model phonological, morphological, and lexical distance as Bayesian noise processes to synthesize artificial languages that are controllably distant from the HRLN. We analyse PD as a function of underlying noise parameters, offering insights on model robustness to isolated and composed linguistic phenomena, and the impact of task and HRL characteristics on PD. We calculate parameter posteriors on real CRL-HRLN pair data and show that they follow computed trends of artificial languages, demonstrating the viability of our noisers. Our framework offers a cheap solution for estimating task performance on an unseen CRL given HRLN performance using its posteriors, as well as for diagnosing observed PD on a CRL in terms of its linguistic distances from its HRLN, and opens doors to principled methods of mitigating performance degradation.
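The noising idea in the abstract can be illustrated with a minimal sketch: apply character-level "phonological" substitutions and word-level "lexical" swaps with tunable probabilities to synthesize text controllably distant from the HRLN. All names, the uniform substitution alphabet, and the parameterization here are toy assumptions; the paper's actual noisers are learned, linguistically grounded distributions.

```python
import random

def phonological_noiser(word, theta, rng):
    # With probability theta per alphabetic character, substitute it
    # with a random lowercase letter (a crude stand-in for sampling
    # from a sound-change distribution).
    out = []
    for ch in word:
        if ch.isalpha() and rng.random() < theta:
            out.append(rng.choice("abcdefghijklmnopqrstuvwxyz"))
        else:
            out.append(ch)
    return "".join(out)

def synthesize_crl(sentence, theta_phon, theta_lex, lexicon, rng):
    # With probability theta_lex, replace a word with an "unrelated"
    # CRL form (here: a fully rewritten word, or a lexicon entry if
    # one exists); otherwise apply phonological noise at rate theta_phon.
    words = []
    for w in sentence.split():
        if rng.random() < theta_lex:
            words.append(lexicon.get(w, phonological_noiser(w, 1.0, rng)))
        else:
            words.append(phonological_noiser(w, theta_phon, rng))
    return " ".join(words)

rng = random.Random(0)
print(synthesize_crl("a systematic investigation of cross lingual generalization",
                     theta_phon=0.2, theta_lex=0.1, lexicon={}, rng=rng))
```

Sweeping `theta_phon` and `theta_lex` over a grid and measuring task performance on the resulting artificial languages is the kind of controlled degradation curve the abstract describes; real CRLs are then placed on those curves via their inferred parameters.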
Problem

Research questions and friction points this paper is trying to address.

Dialectal Robustness
Language Model Performance
Resource-poor Languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cost-effective Method
Performance Degradation Analysis
Artificial Language Simulation