Data Processing for the OpenGPT-X Model Family

📅 2024-10-11
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses data quality, multilingual coverage, and regulatory compliance challenges in training large language models (LLMs) for the OpenGPT-X initiative. Methodologically, it introduces a novel “dual-track” data processing paradigm: lightweight filtering for curated datasets and aggressive filtering combined with MinHash/LSH-based deduplication for large-scale web corpora—fully aligned with EU regulations such as the GDPR. The pipeline integrates fastText-based language identification, hybrid rule-and-statistics filtering, a learned quality scoring model, and end-to-end metadata provenance tracking. Its primary contribution is the construction of the first high-quality, EU-compliant multilingual corpus for LLM training, explicitly designed for public-sector applications. Empirical evaluation demonstrates substantial improvements in model robustness, transparency, and auditability—particularly in government and public service use cases—while ensuring legal and ethical adherence across 24 official EU languages.
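The summary above names MinHash/LSH-based deduplication as the core technique for the large-scale web track. A minimal, self-contained sketch of that technique (shingle size, hash count, and band count are illustrative choices, not the paper's actual configuration):

```python
import hashlib
from collections import defaultdict

def shingles(text, n=5):
    """Character n-gram shingles of a document."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def minhash_signature(doc, num_hashes=64):
    """MinHash signature: for each seeded hash function, keep the
    minimum hash value over the document's shingles."""
    sig = []
    for seed in range(num_hashes):
        salt = seed.to_bytes(8, "little")  # seeds one hash function per slot
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8, salt=salt).digest(),
                "little")
            for s in shingles(doc)
        ))
    return sig

def lsh_candidates(docs, num_hashes=64, bands=16):
    """LSH banding: split each signature into bands; documents whose
    signatures collide in at least one band become duplicate candidates."""
    rows = num_hashes // bands
    buckets = defaultdict(list)
    for i, doc in enumerate(docs):
        sig = minhash_signature(doc, num_hashes)
        for b in range(bands):
            key = (b, tuple(sig[b * rows:(b + 1) * rows]))
            buckets[key].append(i)
    return {tuple(sorted(ids)) for ids in buckets.values() if len(ids) > 1}
```

More bands with fewer rows per band raise recall (near-duplicates are more likely to collide somewhere) at the cost of more false candidates that must be verified by comparing full signatures.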

📝 Abstract
This paper presents a comprehensive overview of the data preparation pipeline developed for the OpenGPT-X project, a large-scale initiative aimed at creating open and high-performance multilingual large language models (LLMs). The project goal is to deliver models that cover all major European languages, with a particular focus on real-world applications within the European Union. We explain all data processing steps, starting with the data selection and requirement definition to the preparation of the final datasets for model training. We distinguish between curated data and web data, as each of these categories is handled by distinct pipelines, with curated data undergoing minimal filtering and web data requiring extensive filtering and deduplication. This distinction guided the development of specialized algorithmic solutions for both pipelines. In addition to describing the processing methodologies, we provide an in-depth analysis of the datasets, increasing transparency and alignment with European data regulations. Finally, we share key insights and challenges faced during the project, offering recommendations for future endeavors in large-scale multilingual data preparation for LLMs.
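The abstract distinguishes a lightly filtered curated track from a heavily filtered web track. A minimal routing sketch of that dual-track idea; the thresholds and heuristics here are illustrative placeholders, not the filters described in the paper:

```python
import re

MIN_WORDS = 5          # illustrative thresholds, not the paper's values
MAX_SYMBOL_RATIO = 0.3

def basic_cleanup(doc):
    """Normalize whitespace; applied to both tracks."""
    return re.sub(r"\s+", " ", doc).strip()

def heuristic_filter(doc):
    """Aggressive web-track filter: drop very short documents and
    documents with a high ratio of non-alphanumeric characters."""
    if len(doc.split()) < MIN_WORDS:
        return None
    symbols = sum(not c.isalnum() and not c.isspace() for c in doc)
    if symbols / max(len(doc), 1) > MAX_SYMBOL_RATIO:
        return None
    return doc

def process(doc, source):
    """Route a document through the track matching its source:
    curated data gets minimal filtering, web data the full cascade."""
    steps = [basic_cleanup] if source == "curated" else [basic_cleanup, heuristic_filter]
    for step in steps:
        doc = step(doc)
        if doc is None:
            return None  # document rejected by this track's filters
    return doc
```

In the real pipeline the web cascade would continue with quality scoring and deduplication; the point of the sketch is only the per-source routing.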
Problem

Research questions and friction points this paper is trying to address.

Develop data pipeline for multilingual OpenGPT-X LLMs
Handle curated and web data with distinct processing methods
Ensure compliance with European data regulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Specialized pipelines for curated and web data
Extensive filtering and deduplication for web data
Compliance with European data regulations
Nicolò Brandizzi
Fraunhofer IAIS
Hammam Abdelwahab
Fraunhofer IAIS
Anirban Bhowmick
Fraunhofer IAIS
Lennard Helmer
Fraunhofer IAIS
Benny Jörg Stein
Fraunhofer IAIS
Pavel Denisov
Fraunhofer IAIS
Qasid Saleem
Fraunhofer IAIS
Michael Fromm
Fraunhofer IAIS
Machine Learning · Large Language Models · Argument Mining
Mehdi Ali
Fraunhofer IAIS, LAMARR Institute
Machine Learning · Knowledge Graphs · Relational Learning · NLP · Foundation Models
Richard Rutmann
Fraunhofer IAIS
Farzad Naderi
Fraunhofer IIS
Mohamad Saif Agy
Fraunhofer IIS
Alexander Schwirjow
Fraunhofer IIS
Fabian Kuch
Fraunhofer IIS
Luzian Hahn
Fraunhofer IIS
Malte Ostendorff
University of Göttingen / German Research Center for Artificial Intelligence
Large language models · Recommender systems · Information retrieval
Pedro Ortiz Suarez
Principal Research Scientist, Common Crawl Foundation
Language modeling · Corpus linguistics · Named Entity Recognition · Computational Linguistics · Machine
Georg Rehm
Principal Researcher and Research Fellow, DFKI GmbH
Natural Language Processing · Artificial Intelligence · Language Technology · Computational Linguistics · Semantic Web
Dennis Wegener
Fraunhofer IAIS
Nicolas Flores-Herr
Fraunhofer IAIS
Joachim Köhler
Fraunhofer IAIS
Johannes Leveling
Fraunhofer IAIS