Evaluating Code-Mixing in LLMs Across 18 Languages

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation benchmarks such as LinCE and GLUECoS cover only a few language pairs and a narrow set of tasks, so they cannot comprehensively assess how well large language models (LLMs) handle cross-linguistic code-mixing, a linguistically diverse and structurally complex phenomenon. High-quality synthetic generation of multilingual code-mixed data also remains underexplored. To address these gaps, we introduce the first comprehensive cross-linguistic code-mixing benchmark, spanning 18 languages from seven language families, and propose a novel hybrid data synthesis method that combines GPT-4 prompting with word-level lexical substitution, moving beyond conventional bilingual constraints. We systematically evaluate leading LLMs in zero-shot and few-shot settings across multiple NLP tasks. Results reveal consistently weak performance on cross-linguistic code-mixing and identify model scale, pretraining data volume, and few-shot learning efficacy as critical determinants of success.
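The paper's exact prompts and lexical resources are not reproduced in this summary, so the following is a minimal sketch of the hybrid synthesis idea: swap a fraction of words using a bilingual lexicon, then pair that with a GPT-4 prompt that generates or smooths the code-mixed text. The `LEXICON` entries, substitution ratio, and prompt template below are illustrative assumptions, not the authors' actual artifacts.

```python
import random

# Toy English -> Hindi (romanized) lexicon; an illustrative assumption, not
# the paper's actual resource, which spans 18 languages from 7 families.
LEXICON = {
    "very": "bahut",
    "good": "accha",
    "movie": "film",
    "today": "aaj",
}

def lexical_substitution(sentence: str, ratio: float = 0.5) -> str:
    """Word-level substitution: swap a fraction of known words for their
    translations, producing a synthetic code-mixed sentence."""
    out = []
    for tok in sentence.split():
        key = tok.lower().strip(".,!?")
        if key in LEXICON and random.random() < ratio:
            out.append(LEXICON[key])
        else:
            out.append(tok)
    return " ".join(out)

# Hypothetical prompt template for the GPT-4 half of the hybrid method
# (the authors' actual prompt wording is not given in this summary).
PROMPT = (
    "Rewrite the following English sentence as natural code-mixed text, "
    "switching between English and Hindi at the word or phrase level while "
    "preserving the meaning:\n\n{sentence}"
)

print(lexical_substitution("The movie today was very good."))
```

The two components are complementary: lexical substitution gives cheap, controllable mixing at scale, while the LLM prompt contributes fluency and phrase-level switches that dictionary replacement alone cannot produce.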

📝 Abstract
Code-mixing, the practice of switching between languages within a conversation, presents unique challenges for traditional natural language processing. Existing benchmarks, such as LinCE and GLUECoS, are limited by narrow language pairings and tasks, failing to adequately evaluate the code-mixing capabilities of large language models (LLMs). Despite the significance of code-mixing for multilingual users, research on LLMs in this context remains limited. Additionally, current methods for generating code-mixed data are underdeveloped. In this paper, we conduct a comprehensive evaluation of LLMs' performance on code-mixed data across 18 languages from seven language families. We also propose a novel approach for generating synthetic code-mixed texts by combining word substitution with GPT-4 prompting. Our analysis reveals consistent underperformance of LLMs on code-mixed datasets involving multiple language families. We suggest that improvements in training data size, model scale, and few-shot learning could enhance their performance.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' code-mixing ability across 18 languages
Addressing limitations in current code-mixed data benchmarks
Improving synthetic code-mixed text generation methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates leading LLMs on code-mixed data across 18 languages in zero-shot and few-shot settings (see the sketch below)
Proposes synthetic code-mixed text generation combining GPT-4 prompting with word-level lexical substitution
Identifies consistent LLM underperformance on code-mixing that spans multiple language families
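To make the zero-shot versus few-shot comparison concrete, here is a hedged sketch of prompt construction for a code-mixed sentiment task. The task, labels, and exemplars are hypothetical, since the benchmark's actual task suite is not listed in this summary.

```python
# Hypothetical few-shot exemplars for code-mixed sentiment classification;
# not drawn from the paper's benchmark data.
FEW_SHOT = [
    ("Yeh movie bahut accha tha!", "positive"),
    ("The service was bilkul bekaar.", "negative"),
]

def build_prompt(text: str, k: int = 2) -> str:
    """Assemble a prompt: k=0 gives the zero-shot setting, k>0 few-shot."""
    lines = ["Classify the sentiment of the code-mixed sentence as positive or negative."]
    for example, label in FEW_SHOT[:k]:
        lines.append(f"Sentence: {example}\nSentiment: {label}")
    lines.append(f"Sentence: {text}\nSentiment:")
    return "\n\n".join(lines)

print(build_prompt("Aaj ka din was really great!", k=0))  # zero-shot
print(build_prompt("Aaj ka din was really great!", k=2))  # few-shot
```

Varying k while holding the test sentence fixed is what lets the study isolate few-shot learning efficacy as one of the determinants of performance.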