🤖 AI Summary
To address the challenge of balancing performance, parameter count, and inference efficiency in lightweight text embedding models across multilingual, English, and code domains, this paper introduces an open-source embedding model based on the Gemma 3 architecture. Methodologically, it combines encoder-decoder initialization, geometric embedding distillation, spread-out regularization, and multi-checkpoint fusion, enabling efficient knowledge transfer from large language models and structural optimization of the embedding space in a model with fewer than 500M parameters. Experiments demonstrate state-of-the-art results on all three MTEB sub-benchmarks (multilingual, English, and code), outperforming baseline models with over twice the parameter count. The model remains robust under quantization and dimensional truncation, significantly improving latency and throughput for edge deployment, and thus achieves an exceptional trade-off between cost and strong generalization across these diverse domains.
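The summary does not reproduce the paper's exact formulations, but the PyTorch sketch below illustrates two of the named loss components under common formulations: a relational (geometry-matching) distillation loss that aligns the student's pairwise similarity matrix with the teacher's, and a spread-out regularizer in the spirit of global orthogonal regularization (Zhang et al., 2017). The function names, tensor shapes, and loss weight are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def geometric_distillation_loss(student_emb: torch.Tensor,
                                teacher_emb: torch.Tensor) -> torch.Tensor:
    # Match the student's pairwise cosine-similarity structure to the
    # teacher's. Comparing n x n similarity matrices (rather than raw
    # vectors) works even when student and teacher widths differ.
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1)
    return F.mse_loss(s @ s.T, t @ t.T)

def spread_out_regularizer(embeddings: torch.Tensor) -> torch.Tensor:
    # Push non-matching pairs toward the statistics of uniformly
    # distributed unit vectors: mean similarity ~ 0, second moment ~ 1/d.
    z = F.normalize(embeddings, dim=-1)           # (n, d), unit norm
    n, d = z.shape
    sims = z @ z.T                                # pairwise cosine similarities
    off_diag = sims[~torch.eye(n, dtype=torch.bool, device=z.device)]
    return off_diag.mean() ** 2 + torch.clamp(
        (off_diag ** 2).mean() - 1.0 / d, min=0.0)

# Toy usage; the 0.1 regularizer weight is an arbitrary illustrative choice.
student = torch.randn(32, 768)    # hypothetical student batch
teacher = torch.randn(32, 3072)   # hypothetical (wider) teacher batch
loss = geometric_distillation_loss(student, teacher) \
       + 0.1 * spread_out_regularizer(student)
```

Matching similarity matrices instead of raw embeddings is one way such a student can distill from a teacher with a different embedding width, which is consistent with transferring knowledge from a much larger model.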
📝 Abstract
We introduce EmbeddingGemma, a new lightweight, open text embedding model based on the Gemma 3 language model family. Our innovative training recipe strategically captures knowledge from larger models via encoder-decoder initialization and geometric embedding distillation. We improve model robustness and expressiveness with a spread-out regularizer, and ensure generalizability by merging checkpoints from varied, optimized mixtures. Evaluated on the Massive Text Embedding Benchmark (MTEB) across multilingual, English, and code domains, EmbeddingGemma (300M) achieves state-of-the-art results. Notably, it outperforms prior top models, both proprietary and open, with fewer than 500M parameters, and provides performance comparable to models double its size, offering an exceptional performance-to-cost ratio. Remarkably, this lead persists when quantizing model weights or truncating embedding outputs. This makes EmbeddingGemma particularly well-suited for low-latency and high-throughput use cases such as on-device applications. We provide ablation studies exploring our key design choices. We release EmbeddingGemma to the community to promote further research.
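As a usage-level illustration of the truncation-robustness claim, the sketch below encodes text with sentence-transformers and truncates the embeddings Matryoshka-style. The model id and the 768/256 dimensions are assumptions made for illustration; consult the official release for exact identifiers and output sizes.

```python
# Minimal sketch: dimension truncation of a text embedding model.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")  # assumed model id

texts = ["What is retrieval-augmented generation?",
         "RAG combines a retriever with a generator."]
full = model.encode(texts, normalize_embeddings=True)      # e.g., shape (2, 768)

# Keep only the leading 256 dimensions and re-normalize. Matryoshka-style
# training concentrates information in the leading dimensions, so quality
# should degrade gracefully.
trunc = full[:, :256]
trunc = trunc / np.linalg.norm(trunc, axis=1, keepdims=True)

print("full-dim similarity:", float(full[0] @ full[1]))
print("256-dim similarity: ", float(trunc[0] @ trunc[1]))
```

Per the abstract's claim, the truncated similarity should track the full-dimension similarity closely while cutting storage and comparison cost, which is what makes truncation attractive for on-device deployment.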