TranslateGemma Technical Report

📅 2026-01-13
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work proposes a two-stage fine-tuning strategy, built on the Gemma 3 base models, to improve the translation quality and efficiency of open large language models in multilingual machine translation. The approach first applies supervised fine-tuning on high-quality synthetic and human-curated parallel corpora, then adds a reinforcement learning phase that optimizes translation quality with an ensemble of reward models (MetricX-QE and AutoMQM), applied to the Gemma series for the first time. The method substantially improves translation performance, even in the smaller models, while preserving multimodal capabilities. Experimental results show consistent and significant gains over baselines on both the WMT25 human evaluation (10 language pairs) and the WMT24++ automatic evaluation (55 language pairs); the smaller models often match or surpass larger baseline counterparts, and performance also improves on the Vistra image-to-text translation benchmark.

📝 Abstract
We present TranslateGemma, a suite of open machine translation models based on the Gemma 3 foundation models. To enhance the inherent multilingual capabilities of Gemma 3 for the translation task, we employ a two-stage fine-tuning process. First, supervised fine-tuning is performed using a rich mixture of high-quality large-scale synthetic parallel data generated via state-of-the-art models and human-translated parallel data. This is followed by a reinforcement learning phase, where we optimize translation quality using an ensemble of reward models, including MetricX-QE and AutoMQM, targeting translation quality. We demonstrate the effectiveness of TranslateGemma with human evaluation on the WMT25 test set across 10 language pairs and with automatic evaluation on the WMT24++ benchmark across 55 language pairs. Automatic metrics show consistent and substantial gains over the baseline Gemma 3 models across all sizes. Notably, smaller TranslateGemma models often achieve performance comparable to larger baseline models, offering improved efficiency. We also show that TranslateGemma models retain strong multimodal capabilities, with enhanced performance on the Vistra image translation benchmark. The release of the open TranslateGemma models aims to provide the research community with powerful and adaptable tools for machine translation.
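As a rough illustration of the reinforcement-learning stage described above, the sketch below combines scores from two quality-estimation reward models into a single scalar reward. The score ranges, the mapping to rewards, and the equal weighting are assumptions made for illustration; the paper states only that an ensemble of reward models including MetricX-QE and AutoMQM is used, not how the scores are combined.

```python
# Illustrative sketch (not the paper's implementation): turning two
# quality-estimation scores into one scalar reward for RL fine-tuning.
# Both models are assumed here to emit error scores in roughly [0, 25],
# where lower means a better translation.

def metricx_qe_to_reward(error_score: float) -> float:
    """Map a MetricX-QE-style error score (0 = perfect) to a reward in [0, 1]."""
    clipped = min(max(error_score, 0.0), 25.0)
    return 1.0 - clipped / 25.0

def automqm_to_reward(mqm_penalty: float) -> float:
    """Map an AutoMQM-style MQM error penalty (0 = perfect) to a reward in [0, 1]."""
    clipped = min(max(mqm_penalty, 0.0), 25.0)
    return 1.0 - clipped / 25.0

def ensemble_reward(metricx_qe_score: float, mqm_penalty: float,
                    weights=(0.5, 0.5)) -> float:
    """Weighted average of the two per-translation rewards."""
    r_qe = metricx_qe_to_reward(metricx_qe_score)
    r_mqm = automqm_to_reward(mqm_penalty)
    return weights[0] * r_qe + weights[1] * r_mqm

# Example: a candidate translation with a MetricX-QE error score of 2.0
# and an accumulated MQM penalty of 5.0.
print(ensemble_reward(2.0, 5.0))
```

In an actual RL loop, this scalar would score each sampled translation during policy optimization; the clipping and the linear mapping here are placeholders for whatever calibration the authors use.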
Problem

Research questions and friction points this paper is trying to address.

machine translation
multilingual
translation quality
foundation models
open models
Innovation

Methods, ideas, or system contributions that make the work stand out.

TranslateGemma
two-stage fine-tuning
reinforcement learning for translation
multilingual machine translation
open-source MT models
Mara Finkelstein
Google Translate Research Team
Isaac Caswell
Researcher, Google Translate
Translation, Back-Translation, Low-Resource Translation, Translationese
Tobias Domhan
Machine Learning Scientist, Amazon
Machine Learning, Machine Translation, Bayesian Optimization
Jan-Thorsten Peter
Google
Neural Machine Translation
Juraj Juraska
Google
Natural Language Generation, Machine Translation, Dialogue Systems, Conversational AI
Parker Riley
Google Research
Natural Language Processing, Machine Translation
Daniel Deutsch
University of Pennsylvania
natural language processing, machine learning
Cole Dilanni
Google Translate Research Team
Colin Cherry
Google Research
Natural Language Processing, Computational Linguistics, Machine Translation
Eleftheria Briakou
Research Scientist at Google Research
Machine Translation, Multilingual NLP
Elizabeth Nielsen
Google Translate Research Team
Jiaming Luo
Shanghai Jiao Tong University
Dialogue System, Digital Mental Health
Kat Black
Google Translate Research Team
Ryan Mullins
Google Translate Research Team
Sweta Agrawal
Research Scientist at Google
Machine Translation, Natural Language Generation and Evaluation
Wenda Xu
Google
LLM Evaluation, LLM Alignment
Erin Kats
Google Translate Research Team
Stephane Jaskiewicz
Google Translate Research Team
Markus Freitag
Google
Multilingual LLM, Machine Translation, Machine Learning, NLP
David Vilar
Staff Research Scientist, Google
Machine Translation, Machine Learning