🤖 AI Summary
The Gemma 3 Technical Report addresses key limitations of lightweight open models: weak multimodal capabilities, high memory overhead for long-context processing, and insufficient multilingual support and reasoning performance. To tackle these challenges, the report introduces three core contributions: (1) an architecture with an increased ratio of local to global attention layers and a short local attention span, drastically reducing KV-cache memory consumption; (2) a training recipe combining knowledge distillation during pre-training with a novel multi-stage post-training pipeline, improving visual understanding, 128K-token context processing, mathematical reasoning, dialogue, and multilingual ability; and (3) the fully open Gemma 3 series (1B to 27B parameters), in which Gemma3-4B-IT matches Gemma2-27B-IT and Gemma3-27B-IT approaches Gemini-1.5-Pro on multiple benchmarks, enabling efficient deployment in resource-constrained environments.
📝 Abstract
We introduce Gemma 3, a multimodal addition to the Gemma family of lightweight open models, ranging in scale from 1 to 27 billion parameters. This version introduces vision understanding abilities, wider language coverage, and longer context of at least 128K tokens. We also change the model architecture to reduce the KV-cache memory that tends to explode with long context. This is achieved by increasing the ratio of local to global attention layers and keeping the span of local attention short. The Gemma 3 models are trained with distillation and achieve superior performance to Gemma 2 in both their pre-trained and instruction-finetuned versions. In particular, our novel post-training recipe significantly improves the math, chat, instruction-following, and multilingual abilities, making Gemma3-4B-IT competitive with Gemma2-27B-IT and Gemma3-27B-IT comparable to Gemini-1.5-Pro across benchmarks. We release all our models to the community.
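To give a rough sense of why interleaving local (sliding-window) and global attention layers shrinks the KV cache, the sketch below estimates cache size for a hypothetical layer stack. The layer count, head dimensions, 5:1 local-to-global ratio, and 1024-token window here are illustrative assumptions, not the published Gemma 3 configuration:

```python
def kv_tokens_per_layer(context_len, window, is_global):
    """A global layer caches keys/values for the full context; a local
    (sliding-window) layer only keeps the most recent `window` tokens."""
    return context_len if is_global else min(context_len, window)

def kv_cache_bytes(context_len, n_layers, locals_per_global, window,
                   n_kv_heads, head_dim, bytes_per_elem=2):
    """Total KV-cache size for a stack that interleaves
    `locals_per_global` local layers before each global layer.
    All dimensions here are hypothetical, for illustration only."""
    total = 0
    for layer in range(n_layers):
        # e.g. locals_per_global=5 -> layers 0..4 local, layer 5 global, repeating
        is_global = layer % (locals_per_global + 1) == locals_per_global
        tokens = kv_tokens_per_layer(context_len, window, is_global)
        total += 2 * tokens * n_kv_heads * head_dim * bytes_per_elem  # K and V
    return total

# Hypothetical dimensions (not taken from the report):
CTX, LAYERS, HEADS, DIM = 128 * 1024, 48, 16, 128

all_global = kv_cache_bytes(CTX, LAYERS, 0, 1024, HEADS, DIM)  # every layer global
mixed = kv_cache_bytes(CTX, LAYERS, 5, 1024, HEADS, DIM)       # 5 local : 1 global
print(f"all-global: {all_global / 2**30:.1f} GiB, "
      f"5:1 local/global: {mixed / 2**30:.1f} GiB")
```

With these made-up dimensions, the interleaved stack needs several times less cache memory at 128K context than an all-global stack, since only one layer in six stores keys and values for the full context; the exact savings depend on the ratio, the window length, and the model dimensions.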