🤖 AI Summary
To address the limited long-context modeling capability of Finnish and related languages under multilingual resource constraints, this paper pretrains six ModernBERT encoder models of varying scales. Methodologically, the authors extend the BERT architecture and perform self-supervised pretraining on large-scale corpora of Finnish and closely related languages, with explicit optimization for sequences exceeding 512 tokens; they further conduct a systematic analysis of final-stage data curation strategies. The contributions are threefold: (1) the first dedicated, scalable multilingual BERT family designed for Finno-Ugric languages; (2) consistent performance gains over general-purpose multilingual models (e.g., mBERT, XLM-R) and the monolingual FinBERT on long-text NLP tasks, including question answering and document classification; and (3) full open-sourcing of all models and training code to support research and applications in low-resource, long-context NLP.
📄 Abstract
This paper reports on pretraining ModernBERT encoder models in six different sizes, ranging from 51M to 475M parameters, with a focus on limited multilingualism, emphasizing languages relevant to Finland. Our models are competitive with, or superior to, existing multilingual models. They outperform monolingual models on tasks that require a context longer than 512 tokens. We present empirical results on using different data in the final stage of training. The code and models are publicly released.
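The abstract's point about contexts longer than 512 tokens can be made concrete: a classic BERT-style encoder must split a long document into overlapping windows, while a long-context encoder can process it in one pass. The sketch below uses dummy token IDs only (no actual model, tokenizer, or released checkpoint from the paper is involved), and the `stride` overlap is an illustrative choice, not a value from the paper.

```python
# Why a 512-token limit matters: a short-context encoder needs windowing,
# a long-context encoder does not. Token IDs here are dummy integers.

def chunk_tokens(token_ids, max_len=512, stride=128):
    """Split a token sequence into overlapping windows of at most max_len.

    Consecutive windows overlap by `stride` tokens so that no span of
    context is cut cleanly in half at a window boundary.
    """
    if len(token_ids) <= max_len:
        return [token_ids]
    windows, start = [], 0
    while start < len(token_ids):
        windows.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break
        start += max_len - stride
    return windows

doc = list(range(1300))                        # a 1300-"token" document
print(len(chunk_tokens(doc)))                  # 512-token model: 4 windows
print(len(chunk_tokens(doc, max_len=8192)))    # long-context model: 1 window
```

With a 512-token budget the document is shattered into four overlapping windows whose predictions must be aggregated afterwards; with a longer context budget the same document fits in a single forward pass, which is the setting where the paper reports its models outperforming monolingual baselines.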