NeoBERT: A Next-Generation BERT

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing bidirectional encoders—such as BERT and RoBERTa—lag behind modern autoregressive LLMs (e.g., LLaMA, DeepSeek) in architectural innovation, pretraining paradigms, and contextual capacity. To address this gap, we propose NeoBERT: a next-generation, efficient bidirectional encoder tailored to contemporary NLP demands. Its core innovations include: (1) the first unified integration of an empirically optimal depth-to-width ratio, 4096-token context modeling, modern pretraining corpora, and a compact 250M-parameter scale; (2) an enhanced Transformer architecture; and (3) evaluation under a unified MTEB/GLUE fine-tuning benchmark. Experiments demonstrate that NeoBERT achieves state-of-the-art performance on the MTEB benchmark, outperforming BERT-large, RoBERTa-large, and other leading encoders under identical fine-tuning protocols. All code, data, and model checkpoints are publicly released.

📝 Abstract
Recent innovations in architecture, pre-training, and fine-tuning have led to the remarkable in-context learning and reasoning abilities of large auto-regressive language models such as LLaMA and DeepSeek. In contrast, encoders like BERT and RoBERTa have not seen the same level of progress despite being foundational for many downstream NLP applications. To bridge this gap, we introduce NeoBERT, a next-generation encoder that redefines the capabilities of bidirectional models by integrating state-of-the-art advancements in architecture, modern data, and optimized pre-training methodologies. NeoBERT is designed for seamless adoption: it serves as a plug-and-play replacement for existing base models, relies on an optimal depth-to-width ratio, and leverages an extended context length of 4,096 tokens. Despite its compact 250M parameter footprint, it achieves state-of-the-art results on the massive MTEB benchmark, outperforming BERT-large, RoBERTa-large, NomicBERT, and ModernBERT under identical fine-tuning conditions. In addition, we rigorously evaluate the impact of each modification on GLUE and design a uniform fine-tuning and evaluation framework for MTEB. We release all code, data, checkpoints, and training scripts to accelerate research and real-world adoption.
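The abstract's emphasis on an "optimal depth-to-width ratio" at a fixed 250M-parameter budget can be made concrete with a back-of-the-envelope estimate. The sketch below uses the common ~12·d² per-layer approximation for a Transformer encoder (attention projections plus a 4×-wide feed-forward block) with illustrative depth, width, and vocabulary values — these are assumptions for the sake of the sketch, not NeoBERT's published configuration, which readers should take from the paper.

```python
def transformer_param_estimate(depth: int, width: int, vocab_size: int = 30_000) -> int:
    """Rough parameter count for a Transformer encoder.

    Approximates each layer as ~12 * width**2 weights (Q/K/V/output
    projections plus a 4x-wide feed-forward block) and adds the token
    embedding matrix. Biases, layer norms, and positional parameters
    are ignored; all values here are illustrative, not NeoBERT's.
    """
    per_layer = 12 * width ** 2
    embeddings = vocab_size * width
    return depth * per_layer + embeddings


# Illustrative sweep: at a fixed width, deeper models trade layers for
# parameters, which is the axis the depth-to-width analysis explores.
for depth in (12, 24, 28):
    total = transformer_param_estimate(depth, width=768)
    print(f"depth={depth:2d} width=768: ~{total / 1e6:.0f}M params")
```

Under this approximation, varying depth at a fixed width moves the total budget roughly linearly, so a 250M target pins down only the product of depth and width² — the paper's contribution is identifying which split of that budget works best empirically.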
Problem

Research questions and friction points this paper is trying to address.

Enhance bidirectional encoder performance
Integrate modern architecture and data
Optimize pre-training for NLP applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrated advanced architectural enhancements
Applied optimized pre-training methodologies
Implemented an extended 4,096-token context length