Llama-Embed-Nemotron-8B: A Universal Text Embedding Model for Multilingual and Cross-Lingual Tasks

📅 2025-11-10
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing open-source text embedding models suffer from limitations in training data diversity, methodological transparency, and weight availability. Method: We propose the first fully open-source multilingual/cross-lingual general-purpose text embedding model. Our approach employs a hybrid data strategy, combining 7.7M publicly available instances with 8.4M high-quality synthetic query-document pairs generated by open large language models, complemented by contrastive learning, model merging, and instruction-tuning support for flexible embedding generation. Ablation studies systematically quantify the contributions of contrastive learning, synthetic data, and model merging. Results: The model achieves state-of-the-art performance on the MMTEB benchmark, outperforming all existing open-source models and several closed-source ones, especially for low-resource languages. Crucially, this work is the first to release *all* components: model weights, training data, source code, and evaluation pipelines, enabling full reproducibility and extensibility for multilingual semantic representation research.
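The "contrastive learning" in the method is, in most open embedding recipes, an InfoNCE-style loss with in-batch negatives. The sketch below shows that generic pattern in PyTorch; the temperature value and the exact formulation are illustrative assumptions (the paper itself compares several contrastive loss implementations).

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor,
                  doc_emb: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """In-batch-negatives contrastive (InfoNCE) loss.

    query_emb, doc_emb: (batch, dim) L2-normalized embeddings, where
    doc_emb[i] is the positive document for query_emb[i] and every
    other document in the batch acts as a negative.
    """
    # Cosine similarities scaled by temperature: (batch, batch)
    logits = query_emb @ doc_emb.T / temperature
    # The positive for query i sits on the diagonal at column i.
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random unit vectors:
q = F.normalize(torch.randn(8, 128), dim=-1)
d = F.normalize(torch.randn(8, 128), dim=-1)
loss = info_nce_loss(q, d)
```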

📝 Abstract
We introduce llama-embed-nemotron-8b, an open-weights text embedding model that achieves state-of-the-art performance on the Multilingual Massive Text Embedding Benchmark (MMTEB) leaderboard as of October 21, 2025. While recent models show strong performance, their training data or methodologies are often not fully disclosed. We aim to address this by developing a fully open-source model, publicly releasing its weights and detailed ablation studies, and planning to share the curated training datasets. Our model demonstrates superior performance across all major embedding tasks, including retrieval, classification, and semantic textual similarity (STS), and excels in challenging multilingual scenarios, such as low-resource languages and cross-lingual setups. This state-of-the-art performance is driven by a novel data mix of 16.1 million query-document pairs, split between 7.7 million samples from public datasets and 8.4 million synthetically generated examples from various open-weight LLMs. One of our key contributions is a detailed ablation study analyzing core design choices, including a comparison of contrastive loss implementations, an evaluation of synthetic data generation (SDG) strategies, and the impact of model merging. llama-embed-nemotron-8b is an instruction-aware model, supporting user-defined instructions to enhance performance for specific use cases. This combination of top-tier performance, broad applicability, and user-driven flexibility enables it to serve as a universal text embedding solution.
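To make "instruction-aware" concrete, the sketch below embeds a query with a task instruction prepended and scores it against a passage. This is a minimal sketch, not the model's documented interface: the Hugging Face model ID, the `Instruct:`/`Query:` prompt template, and the mean-pooling step are all assumptions borrowed from common instruction-tuned embedders (E5-style models).

```python
# Hypothetical usage sketch; model ID, prompt template, and pooling
# are assumptions, not the documented llama-embed-nemotron-8b API.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "nvidia/llama-embed-nemotron-8b"  # assumed Hugging Face ID
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID).eval()

def embed(text: str) -> torch.Tensor:
    batch = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (1, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (1, seq, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling
    return F.normalize(pooled, dim=-1)                     # unit length

# E5-style instruction template (an assumption for this sketch):
instruction = "Given a web search query, retrieve relevant passages"
query = f"Instruct: {instruction}\nQuery: what is contrastive learning?"
passage = "Contrastive learning trains encoders to pull matching pairs together."

score = (embed(query) @ embed(passage).T).item()  # cosine similarity
print(f"similarity: {score:.3f}")
```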
Problem

Research questions and friction points this paper is trying to address.

Developing a fully open-source multilingual text embedding model with disclosed methodologies
Achieving superior performance across retrieval, classification and semantic similarity tasks
Excelling in challenging multilingual scenarios including low-resource languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source multilingual embedding model with top performance
Uses novel data mix of real and synthetic examples
Instruction-aware design supports user-defined task optimization
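One methodological contribution named in the abstract but not in the list above is model merging. In open embedding recipes this often amounts to plain parameter averaging across fine-tuned checkpoints ("model soups"); the sketch below shows that generic pattern and is an assumption, not the paper's exact merging recipe.

```python
import torch

def average_checkpoints(state_dicts, weights=None):
    """Merge same-architecture checkpoints by (weighted) parameter averaging.

    A generic "model soup" style merge; the paper's actual merging
    procedure may differ (this sketch is an assumption).
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(
            w * sd[name].float() for w, sd in zip(weights, state_dicts)
        )
    return merged

# Usage (hypothetical checkpoint files):
# merged = average_checkpoints(
#     [torch.load("ckpt_a.pt"), torch.load("ckpt_b.pt")]
# )
# model.load_state_dict(merged)
```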
👥 Authors
Yauhen Babakhin (NVIDIA)
Radek Osmulski (NVIDIA)
Ronay Ak (NVIDIA)
G. Moreira (NVIDIA)
Mengyao Xu (NVIDIA)
Benedikt Schifferer (NVIDIA)
Bo Liu (NVIDIA)
Even Oldridge (NVIDIA)