Improving Romanian LLM Pretraining Data using Diversity and Quality Filtering

📅 2025-11-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pretraining data for Romanian is scarce, heterogeneous in quality, and insufficiently diverse in topical coverage. Method: the paper proposes a lightweight multitask filtering framework that jointly scores educational value, performs topic modeling, and analyzes syntactic and formatting diversity to conduct hierarchical quality screening of LLM-annotated texts. Unlike conventional one-dimensional data cleaning, the framework systematically characterizes cross-lingual disparities between Romanian and English pretraining corpora across topic distribution, pedagogical relevance, and structural diversity. Contribution/Results: empirical evaluation demonstrates that models trained on the filtered Romanian corpus achieve substantial performance gains on downstream benchmarks, including ROBUST and RONEC, validating the efficacy of structured, domain-aware data curation for low-resource language modeling. The work establishes a principled paradigm for constructing pretraining data in smaller languages, highlighting its critical role in enhancing model capabilities.

📝 Abstract
Large Language Models (LLMs) have recently exploded in popularity, often matching or outperforming human abilities on many tasks. One of the key factors in training LLMs is the availability and curation of high-quality data. Data quality is especially crucial for under-represented languages, where high-quality corpora are scarce. In this work we study the characteristics and coverage of Romanian pretraining corpora and we examine how they differ from English data. By training a lightweight multitask model on carefully LLM-annotated Romanian texts, we are able to analyze and perform multi-level filtering (e.g., educational value, topic, format) to generate high-quality pretraining datasets. Our experiments show noteworthy trends in the topics present in Romanian and English data, while also proving the effectiveness of filtering data through improved LLM pretraining performance across multiple benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Analyzing Romanian pretraining corpora characteristics and English comparisons
Developing multi-level filtering methods for high-quality Romanian datasets
Improving Romanian LLM performance through diversity and quality filtering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multitask model analyzes Romanian text characteristics
Performs multi-level filtering for dataset quality
Uses LLM-annotated texts to improve pretraining performance
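The multi-level filtering sketched in the points above can be pictured as a cascade of per-document checks over scores predicted by the lightweight multitask model. The field names, thresholds, and topic labels below are illustrative placeholders, not settings taken from the paper:

```python
# Hedged sketch of hierarchical quality filtering: each document carries
# scores assumed to come from a lightweight multitask model (educational
# value, topic label, formatting-diversity score). All thresholds and
# topic names here are hypothetical, chosen only for illustration.
from dataclasses import dataclass

@dataclass
class ScoredDoc:
    text: str
    edu_value: float      # hypothetical 0-5 educational-value score
    topic: str            # predicted topic label
    format_score: float   # structural/formatting diversity score

def passes_filters(doc, min_edu=2.5, allowed_topics=None, min_format=0.3):
    """Apply the filter levels in sequence; a document must pass all of them."""
    if doc.edu_value < min_edu:
        return False
    if allowed_topics is not None and doc.topic not in allowed_topics:
        return False
    return doc.format_score >= min_format

docs = [
    ScoredDoc("Lecție despre fotosinteză...", edu_value=3.8, topic="science", format_score=0.7),
    ScoredDoc("Reclamă repetitivă...", edu_value=0.5, topic="ads", format_score=0.2),
]
kept = [d for d in docs if passes_filters(d, allowed_topics={"science", "education"})]
```

Only the first document survives the cascade here; in practice the cutoffs would be tuned against downstream pretraining performance rather than fixed by hand.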
Vlad Negoita
National University of Science and Technology POLITEHNICA Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
Mihai Masala
National University of Science and Technology POLITEHNICA Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
Traian Rebedea
NVIDIA & Assoc Prof @ University Politehnica of Bucharest
Artificial Intelligence · Natural Language Processing · Machine Learning · Human-Computer Interaction