T5Gemma 2: Seeing, Reading, and Understanding Longer

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the need for lightweight, multilingual, multimodal large language models. It introduces T5Gemma 2, an open encoder-decoder series built upon Gemma 3 that supports long contexts (≥32K tokens), cross-lingual understanding, and joint vision-language reasoning. Methodologically, it extends T5Gemma's UL2-based adaptation recipe from the text-only regime to the multimodal setting and proposes two efficiency techniques: tied word embeddings, which share all embeddings across the encoder and decoder, and merged attention, which unifies decoder self-attention and cross-attention within a single mechanism. At comparable parameter scales (270M-270M, 1B-1B, and 4B-4B encoder-decoder pairs), T5Gemma 2 achieves pretraining performance on par with or better than Gemma 3, and significantly outperforms it on downstream tasks after post-training. All models are fully open-sourced, offering an efficient, scalable path for multimodal foundation model development that balances architectural innovation, computational efficiency, and strong multilingual and cross-modal capabilities.

📝 Abstract
We introduce T5Gemma 2, the next generation of the T5Gemma family of lightweight open encoder-decoder models, featuring strong multilingual, multimodal, and long-context capabilities. T5Gemma 2 follows the adaptation recipe (via UL2) of T5Gemma -- adapting a pretrained decoder-only model into an encoder-decoder model -- and extends it from the text-only regime to the multimodal setting, building on the Gemma 3 models. We further propose two methods to improve efficiency: tied word embeddings, which share all embeddings across the encoder and decoder, and merged attention, which unifies decoder self- and cross-attention into a single joint module. Experiments demonstrate the generality of the adaptation strategy across architectures and modalities, as well as the unique strength of the encoder-decoder architecture for long-context modeling. Like T5Gemma, T5Gemma 2 yields comparable or better pretraining performance and significantly improved post-training performance compared to its Gemma 3 counterpart. We release the pretrained models (270M-270M, 1B-1B, and 4B-4B) to the community for future research.
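
The adaptation recipe relies on UL2-style denoising: spans of the encoder input are replaced by sentinel tokens and the decoder learns to reconstruct them, which turns a decoder-only checkpoint into an encoder-decoder learner. Below is a minimal sketch of span-corruption example construction; the sentinel strings, the `span_corrupt` helper, and the span choices are illustrative assumptions, not the paper's exact noising mixture (UL2 blends several denoiser configurations).

```python
SENTINELS = ["<X>", "<Y>", "<Z>"]  # assumed sentinel tokens, T5-style

def span_corrupt(tokens, spans):
    """Build one denoising example.

    tokens: list of input tokens.
    spans:  list of (start, end) half-open ranges to mask, in order.
    Returns (encoder_input, decoder_target).
    """
    enc, dec, last = [], [], 0
    for sentinel, (start, end) in zip(SENTINELS, spans):
        enc += tokens[last:start] + [sentinel]   # keep prefix, drop the span
        dec += [sentinel] + tokens[start:end]    # decoder restores the span
        last = end
    enc += tokens[last:]
    return enc, dec

toks = "the quick brown fox jumps over the lazy dog".split()
enc_in, dec_out = span_corrupt(toks, [(1, 3), (6, 8)])
print(enc_in)   # ['the', '<X>', 'fox', 'jumps', 'over', '<Y>', 'dog']
print(dec_out)  # ['<X>', 'quick', 'brown', '<Y>', 'the', 'lazy']
```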
Problem

Research questions and friction points this paper is trying to address.

Adapting decoder-only models into efficient encoder-decoder architectures
Extending text-only models to handle multimodal and long-context inputs
Improving model efficiency through shared embeddings and merged attention (see the tied-embedding sketch after this list)
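
For the shared-embedding idea, here is a minimal PyTorch sketch (an illustration under assumed layer sizes, not the T5Gemma 2 implementation) of tied word embeddings in an encoder-decoder model: a single embedding table serves the encoder input, the decoder input, and the output projection, so the vocabulary parameters are paid for once instead of three times.

```python
import torch
import torch.nn as nn

class TiedEmbeddingEncDec(nn.Module):
    """Toy encoder-decoder whose three vocab-sized matrices are one tensor."""

    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # the single shared table
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, src_ids, tgt_ids):
        memory = self.encoder(self.embed(src_ids))        # shared table: encoder input
        causal = nn.Transformer.generate_square_subsequent_mask(tgt_ids.shape[1])
        hidden = self.decoder(self.embed(tgt_ids), memory,
                              tgt_mask=causal)            # shared table: decoder input
        return hidden @ self.embed.weight.T               # shared table: output logits

model = TiedEmbeddingEncDec(vocab_size=1000, d_model=64)
logits = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 1000])
```

Tying matters most at small scales such as the 270M models, where embedding tables account for a large share of total parameters.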
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts decoder-only model to encoder-decoder architecture
Introduces tied embeddings and merged attention for efficiency (see the merged-attention sketch after this list)
Extends multimodal capabilities from text-only foundation
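
The merged-attention idea replaces the decoder's separate self-attention and cross-attention sublayers with one joint attention over the concatenation of encoder memory and decoder states. The single-head sketch below, with query/key/value projections omitted, is a simplified assumption about how such a module can be masked (decoder positions stay causal, encoder positions are fully visible); it is not the released implementation.

```python
import torch
import torch.nn.functional as F

def merged_attention(q, dec_kv, enc_kv):
    """q, dec_kv: (batch, tgt_len, d); enc_kv: (batch, src_len, d)."""
    tgt_len, src_len = dec_kv.shape[1], enc_kv.shape[1]
    kv = torch.cat([enc_kv, dec_kv], dim=1)  # one joint key/value stream
    scores = q @ kv.transpose(-2, -1) / q.shape[-1] ** 0.5
    # Every query sees all encoder positions, but only decoder positions
    # at or before itself (causal).
    causal = torch.tril(torch.ones(tgt_len, tgt_len, dtype=torch.bool))
    visible = torch.cat(
        [torch.ones(tgt_len, src_len, dtype=torch.bool), causal], dim=1
    )
    scores = scores.masked_fill(~visible, float("-inf"))
    return F.softmax(scores, dim=-1) @ kv

q = torch.randn(2, 5, 64)
out = merged_attention(q, dec_kv=torch.randn(2, 5, 64),
                       enc_kv=torch.randn(2, 9, 64))
print(out.shape)  # torch.Size([2, 5, 64])
```

Fusing the two sublayers removes one attention pass per decoder layer while keeping encoder memory and decoder history in a single softmax.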
👥 Authors
Biao Zhang (Google DeepMind)
Paul Suganthan (Software Engineer at Google; Data Management, Data Integration, Data Science, Machine Learning, Crowdsourcing)
Gaël Liu (Google DeepMind)
Ilya Philippov (Google DeepMind)
Sahil Dua (Google DeepMind; Large Language Models, Natural Language Processing, Representation Learning, Embeddings)
Ben Hora (Google DeepMind)
Kat Black (Google DeepMind)
Gus Martins (Google DeepMind)
Omar Sanseviero (Google DeepMind)
Shreya Pathak (Indian Institute of Technology Bombay; Computer Science)
Cassidy Hardin (Google DeepMind)
Francesco Visin (Senior Research Scientist at Google DeepMind; Model-based Reinforcement Learning)
Jiageng Zhang (Google DeepMind)
Kathleen Kenealy (Google DeepMind)
Qin Yin (Google DeepMind)
Olivier Lacombe (Google DeepMind)
Armand Joulin (Google DeepMind; Machine Learning)
Tris Warkentin (Google DeepMind)
Adam Roberts (Google DeepMind; Machine Learning, Music Generation, Computer Science, Computational Biology)