🤖 AI Summary
This study investigates the impact of data quality versus quantity on continual pretraining for low-resource language adaptation, focusing on Portuguese. Using LLaMA-2-7B as the base model, we compare continual pretraining on the full ClassiCC-PT corpus (100B tokens) against a high-quality, domain-curated subset (10B tokens) comprising education and STEM texts. Results demonstrate that strategic domain-specific data curation substantially outperforms scale-only expansion: the 10B-token subset—requiring only 10% of the data volume and ~20% of computational resources—achieves superior performance across multiple Portuguese downstream tasks in education and STEM domains. This work provides the first empirical validation of the “data quality–driven” paradigm for efficient low-resource language adaptation. It establishes a reproducible, cost-effective methodology for domain-specific large language model adaptation under resource constraints, offering a principled alternative to brute-force data scaling.
📝 Abstract
Continued pretraining extends a language model's capabilities by further exposing it to additional data, often tailored to a specific linguistic or domain context. This strategy has emerged as an efficient alternative to full retraining when adapting general-purpose models to new settings. In this work, we investigate this paradigm through Curió 7B, a 7-billion-parameter model derived from LLaMA-2 and trained on 100 billion Portuguese tokens from the ClassiCC-PT corpus - the most extensive Portuguese-specific continued-pretraining effort above the three-billion-parameter scale to date. Beyond scale, we investigate whether quantity alone suffices or whether data quality plays a decisive role in linguistic adaptation. To this end, we introduce Curió-Edu 7B, a variant trained exclusively on the educational and STEM-filtered subset of the same corpus, totaling just 10 billion tokens. Despite using only 10% of the data and 20% of the computation, Curió-Edu 7B surpasses the full-corpus model in our evaluations, demonstrating that data selection can be fundamental even when adapting models with limited prior exposure to the target language. The developed models are available at https://huggingface.co/collections/ClassiCC-Corpus/curio-edu