Evaluating Robustness of Large Language Models Against Multilingual Typographical Errors

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of robustness in large language models (LLMs) when processing multilingual text containing spelling errors. To this end, we propose MulTypo, the first multilingual typographical-error generation algorithm grounded in language-specific keyboard layouts and empirically observed human typing behaviors, and construct a comprehensive evaluation framework covering five task categories: natural language inference, multiple-choice question answering, mathematical reasoning, machine translation, and text generation. Evaluating 18 open-source LLMs reveals that spelling errors substantially degrade performance, particularly in generative and reasoning tasks; high-resource languages exhibit greater robustness than low-resource ones; English-to-X translation is more stable than X-to-English; and instruction tuning can unexpectedly exacerbate sensitivity to input noise. Our study fills a critical gap in multilingual robustness evaluation, providing both empirical evidence and practical tools for real-world LLM deployment.
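The MulTypo algorithm itself is not detailed in this summary, but its core idea (simulating human-like typos via keyboard adjacency) can be illustrated with a minimal sketch. The adjacency map, the four error types, and the function name `inject_typo` below are all assumptions for illustration, not the paper's actual implementation:

```python
import random

# Partial English QWERTY adjacency map (assumed layout; the paper uses
# language-specific layouts, which this sketch does not reproduce).
QWERTY_NEIGHBORS = {
    "a": "qwsz", "s": "awedxz", "d": "serfcx", "e": "wsdr",
    "o": "iklp", "r": "edft", "t": "rfgy", "n": "bhjm",
}

def inject_typo(word: str, rng: random.Random) -> str:
    """Apply one human-like typo: adjacent-key substitution,
    deletion, insertion, or transposition."""
    if len(word) < 2:
        return word  # leave very short tokens untouched
    i = rng.randrange(len(word))
    op = rng.choice(["substitute", "delete", "insert", "transpose"])
    ch = word[i].lower()
    if op == "substitute" and ch in QWERTY_NEIGHBORS:
        return word[:i] + rng.choice(QWERTY_NEIGHBORS[ch]) + word[i + 1:]
    if op == "delete":
        return word[:i] + word[i + 1:]
    if op == "insert" and ch in QWERTY_NEIGHBORS:
        return word[:i] + rng.choice(QWERTY_NEIGHBORS[ch]) + word[i:]
    if op == "transpose" and i < len(word) - 1:
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    return word

rng = random.Random(0)
print(inject_typo("translation", rng))
```

A real implementation would additionally weight error types by empirically observed typing behavior rather than sampling them uniformly.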

📝 Abstract
Large language models (LLMs) are increasingly deployed in multilingual, real-world applications with user inputs -- which naturally introduce typographical errors (typos). Yet most benchmarks assume clean input, leaving the robustness of LLMs to typos across languages largely underexplored. To address this gap, we introduce MulTypo, a multilingual typo generation algorithm that simulates human-like errors based on language-specific keyboard layouts and typing behavior. We evaluate 18 open-source LLMs across three model families and five downstream tasks spanning natural language inference, multiple-choice question answering, mathematical reasoning, and machine translation. Our results show that typos consistently degrade performance, particularly in generative tasks and those requiring reasoning -- while the natural language inference task is comparatively more robust. Instruction tuning improves clean-input performance but may increase brittleness under noise. We also observe language-dependent robustness: high-resource languages are generally more robust than low-resource ones, and translation from English is more robust than translation into English. Our findings underscore the need for noise-aware training and multilingual robustness evaluation. We make our code and data publicly available.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM robustness against multilingual typographical errors
Assessing performance degradation across diverse NLP tasks
Analyzing language-dependent robustness variations in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

MulTypo: multilingual typo generation algorithm simulating human-like errors
Evaluation of 18 open-source LLMs across five downstream tasks
Evidence motivating noise-aware training and multilingual robustness evaluation