Beyond English: The Impact of Prompt Translation Strategies across Languages and Tasks in Multilingual LLMs

📅 2025-02-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multilingual large language models (LLMs) commonly rely on English pre-translation, yet existing translation strategies lack systematic evaluation. Method: We propose the first modular prompt translation evaluation framework, decomposing prompts into four components—instructions, context, examples, and output—and systematically assess translation combinations across 35 languages and four tasks: question answering (QA), natural language inference (NLI), named entity recognition (NER), and summarization. Using cross-lingual benchmarks (e.g., XQuAD, XNLI), we evaluate with NLLB and Google Translate, under zero-shot and few-shot settings. Contribution/Results: Our analysis reveals intricate interactions among linguistic similarity, translation quality, and pretraining data scale. Crucially, full translation is suboptimal; translating only instructions and examples yields average QA accuracy gains of +4.2% for low-resource languages. Based on these findings, we derive an interpretable, task-adaptive, and resource-aware translation strategy recommendation guide.

📝 Abstract
Despite advances in the multilingual capabilities of Large Language Models (LLMs) across diverse tasks, English remains the dominant language for LLM research and development. This has led to the widespread practice of pre-translation when working in other languages, i.e., translating the task prompt into English before inference. Selective pre-translation, a more surgical approach, translates only specific prompt components. However, its current use is sporadic and lacks a systematic research foundation. Consequently, the optimal pre-translation strategy for various multilingual settings and tasks remains unclear. In this work, we aim to uncover the optimal setup for pre-translation by systematically assessing its use. Specifically, we view the prompt as a modular entity composed of four functional parts: instruction, context, examples, and output, each of which may be translated or left in the source language. We evaluate pre-translation strategies across 35 languages, covering both low- and high-resource languages, on various tasks including Question Answering (QA), Natural Language Inference (NLI), Named Entity Recognition (NER), and Abstractive Summarization. Our experiments show the impact of factors such as similarity to English, translation quality, and the size of pre-training data on model performance with pre-translation. We suggest practical guidelines for choosing optimal strategies in various multilingual settings.
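The selective pre-translation idea described above can be sketched as follows: the prompt is treated as four independent fields, and only a chosen subset is translated into English before inference. This is a minimal illustration, not the paper's implementation; the `translate` function is a hypothetical stand-in for a real MT system such as NLLB or Google Translate.

```python
from dataclasses import dataclass


@dataclass
class Prompt:
    """A prompt decomposed into the four functional parts used in the paper."""
    instruction: str
    context: str
    examples: str
    output: str  # the output/answer-format directive


def translate(text: str, target: str = "en") -> str:
    # Hypothetical stand-in for a real MT system (e.g., NLLB or Google
    # Translate); here it only tags the text so the effect is visible.
    return f"[{target}] {text}"


def selective_pre_translate(prompt: Prompt, components: set[str]) -> Prompt:
    """Translate only the named components, leaving the rest in the
    source language."""
    fields = {}
    for name in ("instruction", "context", "examples", "output"):
        value = getattr(prompt, name)
        fields[name] = translate(value) if name in components else value
    return Prompt(**fields)
```

For example, the strategy the summary reports as strongest for low-resource QA, translating only instructions and examples, would be `selective_pre_translate(p, {"instruction", "examples"})`, keeping the context and output specification in the source language.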
Problem

Research questions and friction points this paper is trying to address.

Optimal pre-translation strategies in multilingual LLMs
Impact of prompt translation across 35 languages
Guidelines for multilingual tasks in diverse settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular prompt translation strategy
Evaluation across 35 languages
Selective translation improves performance
Itai Mondshine
Bar-Ilan University, Israel
Tzuf Paz-Argaman
Bar-Ilan University, Israel
Reut Tsarfaty
Bar-Ilan University
Natural Language Processing · Computational Linguistics · Artificial Intelligence