EuroBERT: Scaling Multilingual Encoders for European Languages

📅 2025-03-07
📈 Citations: 0
✹ Influential: 0
đŸ€– AI Summary
With the rise of generative decoders, multilingual bidirectional encoders have become increasingly marginalized. Method: This paper introduces the EuroBERT family of multilingual encoders, designed to broaden coverage of major European and widely spoken global languages and to improve multitask capability. The training recipe combines long-context modeling (native 8,192-token sequences), dynamic curriculum learning, and multi-stage pretraining on high-quality European-language data, mathematical texts, and code, built on a modernized encoder architecture. Contribution/Results: EuroBERT achieves substantial gains in cross-lingual generalization and domain transfer, consistently outperforming mBERT, XLM-R, and multilingual LLaMA variants on multilingual understanding, mathematical reasoning, and code-related benchmarks. All models and the training framework are fully open-sourced, offering a renewed pathway for encoder-based approaches to retrieval, classification, mathematical reasoning, and programming tasks.

📝 Abstract
General-purpose multilingual vector representations, used in retrieval, regression and classification, are traditionally obtained from bidirectional encoder models. Despite their wide applicability, encoders have been recently overshadowed by advances in generative decoder-only models. However, many innovations driving this progress are not inherently tied to decoders. In this paper, we revisit the development of multilingual encoders through the lens of these advances, and introduce EuroBERT, a family of multilingual encoders covering European and widely spoken global languages. Our models outperform existing alternatives across a diverse range of tasks, spanning multilingual capabilities, mathematics, and coding, and natively supporting sequences of up to 8,192 tokens. We also examine the design decisions behind EuroBERT, offering insights into our dataset composition and training pipeline. We publicly release the EuroBERT models, including intermediate training checkpoints, together with our training framework.
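The abstract notes that encoder models supply the vector representations used in retrieval, regression, and classification. A common way to obtain a single sentence vector from an encoder's per-token outputs is attention-mask-aware mean pooling; the sketch below illustrates this with toy NumPy arrays (the shapes and values are illustrative stand-ins, not EuroBERT outputs, and the paper does not prescribe this exact pooling).

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token vectors, ignoring padding positions.

    token_embeddings: (batch, seq_len, hidden)
    attention_mask:   (batch, seq_len) with 1 for real tokens, 0 for padding
    """
    mask = attention_mask[:, :, None].astype(token_embeddings.dtype)  # (B, L, 1)
    summed = (token_embeddings * mask).sum(axis=1)                    # (B, H)
    counts = mask.sum(axis=1).clip(min=1e-9)                          # avoid div by 0
    return summed / counts

# Toy batch: 2 sequences of length 4, hidden size 3; the second has padding.
emb = np.arange(24, dtype=np.float64).reshape(2, 4, 3)
mask = np.array([[1, 1, 1, 1],
                 [1, 1, 0, 0]])
pooled = mean_pool(emb, mask)
print(pooled.shape)  # (2, 3)
```

Masked pooling matters because padded positions would otherwise drag the average toward the padding vectors, making embeddings depend on batch composition.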
Problem

Research questions and friction points this paper is trying to address.

Develop multilingual encoders for European and global languages.
Enhance performance in multilingual tasks, mathematics, and coding.
Support long sequences up to 8,192 tokens efficiently.
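EuroBERT's native window is 8,192 tokens; documents longer than that must be split before encoding. A minimal sketch of overlapping-window chunking (the stride and overlap values are illustrative assumptions, not taken from the paper):

```python
def chunk_tokens(token_ids, window=8192, stride=7936):
    """Yield successive windows of at most `window` tokens, overlapping by
    `window - stride` tokens so no span is lost at a chunk boundary."""
    if not token_ids:
        return []
    chunks = []
    for start in range(0, len(token_ids), stride):
        chunks.append(token_ids[start:start + window])
        if start + window >= len(token_ids):
            break
    return chunks

doc = list(range(20000))            # pretend 20k-token document
chunks = chunk_tokens(doc)
print(len(chunks), len(chunks[0]))  # 3 8192
```

Each chunk can then be encoded independently, with per-chunk vectors aggregated (e.g. by the pooling above or by max-scoring in retrieval).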
Innovation

Methods, ideas, or system contributions that make the work stand out.

Develops the EuroBERT family of multilingual encoder models.
Natively supports sequences of up to 8,192 tokens.
Publicly releases the models, intermediate checkpoints, and training framework.
Nicolas Boizard
PhD Student @ University of Paris-Saclay (MICS - CentraleSupélec) x Diabolocom
NLP, Artificial Intelligence
Hippolyte Gisserot-Boukhlef
PhD Candidate, CentraleSupélec, Université Paris-Saclay
Artificial Intelligence, LLMs
Duarte M. Alves
PhD Student, Instituto Superior Técnico, Lisbon
Natural Language Processing, Machine Learning
André Martins
Unbabel; Instituto Superior Técnico & Universidade de Lisboa (Lisbon ELLIS Unit)
Ayoub Hammal
CNRS; LISN; Unbabel
Caio Corro
INSA Rennes, IRISA
Natural Language Processing, Structured Prediction
Céline Hudelot
MICS, CentraleSupélec, Université Paris-Saclay
Emmanuel Malherbe
Artefact
Etienne Malaboeuf
CINES
Fanny Jourdan
Researcher at IRT Saint Exupéry
Natural Language Processing, Explainability, Fairness, Interpretability
Gabriel Hautreux
CINES
Joao Alves
Unbabel; Instituto Superior Técnico & Universidade de Lisboa (Lisbon ELLIS Unit)
Kevin El-Haddad
Diabolocom; ISIA Lab
Manuel Faysse
CentraleSupélec - Université Paris-Saclay
Natural Language Processing, Machine Learning, Privacy
Maxime Peyrard
Université Grenoble Alpes
NLP, Machine Learning, Data Science
Nuno M. Guerreiro
MICS, CentraleSupélec, Université Paris-Saclay; Instituto Superior Técnico & Universidade de Lisboa (Lisbon ELLIS Unit); Unbabel
Patrick Fernandes
Carnegie Mellon University & Instituto Superior Técnico
NLP, Machine Learning
Ricardo Rei
Sword Health
Healthcare AI, Machine Learning, Natural Language Processing, Large Language Models
Pierre Colombo
CS of Equall & Assoc. Prof @ Univ Paris-Saclay (CentraleSupélec)
NLP, Multimodal