🤖 AI Summary
Existing evaluation benchmarks (e.g., LinCE, GLUECoS) suffer from narrow language coverage and a limited range of tasks, and so fail to comprehensively assess large language models' (LLMs) capabilities on cross-linguistic code-mixing, a linguistically diverse and structurally complex phenomenon; high-quality synthetic multilingual code-mixed data generation also remains underexplored. To address these gaps, we introduce the first comprehensive cross-linguistic code-mixing benchmark, spanning 18 languages from seven language families, and propose a novel hybrid data synthesis method that combines GPT-4-driven prompting with word-level lexical substitution, moving beyond conventional bilingual constraints. We systematically evaluate leading LLMs under zero-shot and few-shot settings across multiple NLP tasks. Results reveal consistently weak performance on cross-linguistic code-mixing and identify model scale, pretraining data volume, and few-shot learning efficacy as critical determinants of success.
📝 Abstract
Code-mixing, the practice of switching between languages within a conversation, presents unique challenges for traditional natural language processing. Existing benchmarks, such as LinCE and GLUECoS, are limited to narrow language pairings and tasks, and therefore fail to adequately evaluate the code-mixing capabilities of large language models (LLMs). Despite the importance of code-mixing for multilingual users, research on LLMs in this context remains scarce, and current methods for generating code-mixed data are underdeveloped. In this paper, we conduct a comprehensive evaluation of LLMs' performance on code-mixed data spanning 18 languages from seven language families. We also propose a novel approach for generating synthetic code-mixed texts that combines word-level substitution with GPT-4 prompting. Our analysis reveals that LLMs consistently underperform on code-mixed datasets involving multiple language families, and suggests that larger training data, greater model scale, and few-shot learning could improve their performance.
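To make the word-substitution half of the synthesis method concrete, here is a minimal illustrative sketch in Python. It is not the paper's implementation: the toy English-Spanish lexicon, the `substitute` function, and the substitution ratio are all hypothetical stand-ins for whatever bilingual resources and selection strategy the actual pipeline uses.

```python
import random

# Toy English->Spanish lexicon (hypothetical; a real pipeline would use
# bilingual dictionaries or aligned corpora, and far more languages).
EN_ES = {"friend": "amigo", "very": "muy", "happy": "feliz", "house": "casa"}

def substitute(sentence: str, lexicon: dict, ratio: float = 0.5, seed: int = 0) -> str:
    """Replace a fraction of translatable words with their counterparts,
    producing a synthetic code-mixed sentence."""
    rng = random.Random(seed)  # seeded for reproducibility
    tokens = sentence.split()
    # Indices of tokens that have an entry in the lexicon.
    candidates = [i for i, t in enumerate(tokens) if t.lower() in lexicon]
    if not candidates:
        return sentence
    k = max(1, int(len(candidates) * ratio))
    for i in rng.sample(candidates, k):
        tokens[i] = lexicon[tokens[i].lower()]
    return " ".join(tokens)

print(substitute("my friend is very happy", EN_ES))
```

In the full hybrid method described above, such lexically substituted sentences would then be refined or generated outright via GPT-4 prompting to yield more fluent, naturalistic code-mixed text.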