jina-embeddings-v5-text: Task-Targeted Embedding Distillation

πŸ“… 2026-02-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the limited performance of small-scale text embedding models on semantic similarity tasks by proposing a task-oriented embedding distillation framework that integrates knowledge distillation with a task-specific contrastive loss. The proposed approach substantially outperforms baselines that rely solely on distillation or on contrastive learning, while supporting long texts (up to 32k tokens), multilingual inputs, and binary quantization with strong robustness. The released Small and Nano model variants achieve state-of-the-art or competitive performance among models of comparable size, and the open-sourced models are well-suited for downstream applications such as retrieval, clustering, and classification.

πŸ“ Abstract
Text embedding models are widely used for semantic similarity tasks, including information retrieval, clustering, and classification. General-purpose models are typically trained with single- or multi-stage processes using contrastive loss functions. We introduce a novel training regimen that combines model distillation techniques with task-specific contrastive loss to produce compact, high-performance embedding models. Our findings suggest that this approach is more effective for training small models than purely contrastive or distillation-based training paradigms alone. Benchmark scores for the resulting models, jina-embeddings-v5-text-small and jina-embeddings-v5-text-nano, exceed or match the state-of-the-art for models of similar size. jina-embeddings-v5-text models additionally support long texts (up to 32k tokens) in many languages, and generate embeddings that remain robust under truncation and binary quantization. Model weights are publicly available, hopefully inspiring further advances in embedding model development.
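The training regimen described in the abstract combines a distillation objective with a task-specific contrastive loss. As a rough illustration, the combined objective can be sketched as a weighted sum of an embedding-distillation term and an in-batch contrastive (InfoNCE) term. The MSE distillation target, the `alpha` weighting, and all function names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Normalize rows to unit length so dot products are cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def distillation_loss(student, teacher):
    # Mean squared error between student and (frozen) teacher embeddings.
    # Assumed distillation target; the paper's exact loss may differ.
    return float(np.mean((student - teacher) ** 2))

def infonce_loss(queries, docs, temperature=0.05):
    # In-batch contrastive loss: queries[i] should match docs[i]
    # against all other docs in the batch.
    q = l2_normalize(queries)
    d = l2_normalize(docs)
    logits = q @ d.T / temperature                  # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))      # diagonal entries = positives

def combined_loss(student_q, student_d, teacher_q, teacher_d, alpha=0.5):
    # Weighted mix of embedding distillation and task contrastive loss
    # (alpha is a hypothetical mixing weight).
    distill = 0.5 * (distillation_loss(student_q, teacher_q)
                     + distillation_loss(student_d, teacher_d))
    contrastive = infonce_loss(student_q, student_d)
    return alpha * distill + (1.0 - alpha) * contrastive
```

In this sketch the teacher embeddings are fixed targets, so a student that reproduces them exactly drives the distillation term to zero while the contrastive term still shapes the space for the retrieval task.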
Problem

Research questions and friction points this paper is trying to address.

text embedding
semantic similarity
model distillation
contrastive loss
compact models
Innovation

Methods, ideas, or system contributions that make the work stand out.

embedding distillation
task-specific contrastive loss
compact embedding models
long-context embeddings
binary quantization robustness
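Binary quantization, listed among the contributions above, typically maps each embedding dimension to a single bit by its sign, with Hamming similarity standing in for cosine similarity at search time. A minimal sketch under those assumptions (function names are hypothetical; production pipelines pack the bits into bytes):

```python
import numpy as np

def binarize(emb):
    # One bit per dimension: keep only the sign of each component.
    # This cuts storage 32x versus float32 once bits are packed into bytes.
    return (emb > 0).astype(np.uint8)

def hamming_similarity(a_bits, b_bits):
    # Fraction of matching bits; a cheap proxy for cosine similarity
    # between the original float embeddings.
    return float(np.mean(a_bits == b_bits))

# Toy demo: nearby vectors stay similar after quantization.
rng = np.random.default_rng(0)
doc = rng.normal(size=512)
near = doc + 0.1 * rng.normal(size=512)   # small perturbation of doc
far = rng.normal(size=512)                # unrelated vector

sim_near = hamming_similarity(binarize(doc), binarize(near))
sim_far = hamming_similarity(binarize(doc), binarize(far))
```

For storage, `np.packbits` can compress the 0/1 array to one byte per eight dimensions; the "robustness" claim in the paper is that rankings computed on these bit vectors stay close to rankings computed on the full-precision embeddings.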
πŸ”Ž Similar Papers
No similar papers found.
Authors
Mohammad Kalim Akram
Saba Sturua
Nastia Havriushenko
Quentin Herreros
Michael Günther
Maximilian Werk
Han Xiao