AI Summary
This work addresses the challenge of simultaneously achieving high efficiency and high quality in bilingual (Chinese–English) text embedding models. We propose a lightweight modeling approach that integrates knowledge distillation with dynamic token compression. Our core innovation is a learnable, one-dimensional convolution-based token compression module that enables input-adaptive, dynamic compression ratios. To preserve semantic discriminability, we enhance the compressed representations via contrastive learning and jointly optimize the model on bilingual corpora. Experimental results demonstrate that our 600M-parameter model matches the performance of an 8B-parameter baseline across multiple bilingual retrieval and semantic similarity benchmarks, while achieving a 2.3× speedup in inference latency and reducing memory footprint by 64%. It significantly outperforms conventional models of comparable size.
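The convolution-based compression described above can be illustrated with a minimal NumPy sketch. This is not the released model's implementation: the `(ratio, dim)` weight parameterization, the kernel-size-equals-stride choice, and the mean-pooling initialization are all simplifying assumptions made here to show how a learnable 1D convolution can merge windows of tokens at an input-adaptive ratio.

```python
import numpy as np

def conv1d_compress(tokens, weight, ratio):
    """Compress a (seq_len, dim) token sequence by `ratio` using a
    depthwise 1D convolution whose kernel size equals its stride, so
    each output token mixes one non-overlapping window of inputs.
    `weight` has shape (ratio, dim): one learnable coefficient per
    window position and channel (a hypothetical parameterization)."""
    seq_len, dim = tokens.shape
    # Pad with zeros so seq_len is divisible by the compression ratio.
    pad = (-seq_len) % ratio
    if pad:
        tokens = np.vstack([tokens, np.zeros((pad, dim))])
    windows = tokens.reshape(-1, ratio, dim)       # (n_out, ratio, dim)
    return (windows * weight[None]).sum(axis=1)    # (n_out, dim)

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 4))                   # 10 tokens, dim 4
for ratio in (1, 2, 4):                            # "dynamic": ratio varies per input
    w = np.full((ratio, 4), 1.0 / ratio)           # mean-pooling initialization
    y = conv1d_compress(x, w, ratio)
    print(ratio, y.shape)                          # output length shrinks by ~ratio
```

Varying `ratio` across training batches, as the summary describes, would expose the model to several effective sequence lengths so the learned weights remain useful at any deployed compression rate.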
Abstract
This technical report presents the training methodology and evaluation results of the open-source Jasper-Token-Compression-600M model, released in November 2025. Building on the distillation-based recipes of the English-only Stella and Jasper models, we extend this approach to the bilingual (English and Chinese) setting and further enhance model performance through contrastive learning. A key innovation of our model is a one-dimensional convolution-based token compression module. We dynamically adjust the compression rate during training, enabling the model to learn more robust and efficient compressed text representations. By combining knowledge distillation with token compression, we achieve significant improvements in both embedding quality and inference efficiency: our model runs faster than a conventional 0.6B model while achieving performance comparable to that of an 8B model. For more information on the model release, visit: https://huggingface.co/infgrad/Jasper-Token-Compression-600M.
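The abstract combines two training signals: a knowledge-distillation term that pulls student embeddings toward a teacher's, and a contrastive term that keeps compressed representations discriminative. A minimal NumPy sketch of both losses follows; the in-batch InfoNCE formulation, the cosine-based distillation term, and the temperature value are standard choices assumed here for illustration, not the report's exact objective.

```python
import numpy as np

def info_nce(q, p, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss: each query's positive is the
    passage at the same row index; all other rows act as negatives."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    logits = q @ p.T / temperature                  # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # -log p(positive | query)

def cosine_distill(student, teacher):
    """Distillation term: 1 - mean cosine similarity between the
    student's and the (frozen) teacher's embeddings."""
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    return 1.0 - np.mean(np.sum(s * t, axis=1))

rng = np.random.default_rng(0)
student_q = rng.standard_normal((8, 16))            # student query embeddings
student_p = student_q + 0.1 * rng.standard_normal((8, 16))  # noisy positives
teacher_q = rng.standard_normal((8, 16))            # hypothetical teacher outputs
loss = info_nce(student_q, student_p) + cosine_distill(student_q, teacher_q)
print(loss)
```

In a joint objective like the one described, the two terms would typically be weighted and summed per batch, with the compression ratio resampled so the contrastive signal is seen at every compression level.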