How Well Do Large Reasoning Models Translate? A Comprehensive Evaluation for Multi-Domain Machine Translation

📅 2025-05-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the capabilities of Large Reasoning Models (LRMs) for machine translation (MT), focusing on high-difficulty professional domains characterized by semantic complexity, high terminology density, and long-context requirements—spanning 15 domains and four translation directions—and benchmarks them against conventional large language models (LLMs). Method: We propose a domain-adaptive prompting strategy to elicit structured reasoning and introduce an enhanced, multi-dimensional MQM-based human evaluation framework incorporating task difficulty stratification, input length control, and terminology density modeling. Contribution/Results: Experiments demonstrate that LRMs achieve an average +3.2 MT-BLEU gain in semantically complex domains and markedly improve long-text translation quality. Domain-adaptive prompting reduces critical error rates by 27% in key domains, effectively unlocking LRMs’ reasoning potential for specialized MT tasks.

Technology Category

Application Category

📝 Abstract
Large language models (LLMs) have demonstrated strong performance in general-purpose machine translation, but their effectiveness in complex, domain-sensitive translation tasks remains underexplored. Recent advancements in Large Reasoning Models (LRMs), raise the question of whether structured reasoning can enhance translation quality across diverse domains. In this work, we compare the performance of LRMs with traditional LLMs across 15 representative domains and four translation directions. Our evaluation considers various factors, including task difficulty, input length, and terminology density. We use a combination of automatic metrics and an enhanced MQM-based evaluation hierarchy to assess translation quality. Our findings show that LRMs consistently outperform traditional LLMs in semantically complex domains, especially in long-text and high-difficulty translation scenarios. Moreover, domain-adaptive prompting strategies further improve performance by better leveraging the reasoning capabilities of LRMs. These results highlight the potential of structured reasoning in MDMT tasks and provide valuable insights for optimizing translation systems in domain-sensitive contexts.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LRMs versus LLMs in multi-domain machine translation
Assessing structured reasoning impact on domain-sensitive translation quality
Exploring domain-adaptive prompting for improved LRM translation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

LRMs outperform LLMs in complex domains
Domain-adaptive prompting enhances LRM performance
Structured reasoning improves multi-domain translation
🔎 Similar Papers
No similar papers found.