🤖 AI Summary
To address the scarcity of pretraining data for low-resource Indian languages, this paper introduces BhashaKritika, a multilingual synthetic data construction framework covering 10 Indian languages and 540 billion tokens. Methodologically, it grounds synthetic generation in documents, personas, and topics, integrating five complementary generation techniques, and compares translations of English content against native generation in Indic languages. It further designs a modular quality evaluation pipeline incorporating script and language identification, metadata consistency verification, n-gram repetition analysis, and KenLM perplexity-based filtering, enabling robust quality control across diverse scripts and linguistic contexts. Empirical results from model pretraining runs characterize key trade-offs among generation strategies, establishing best practices for multilingual synthetic corpus construction and providing a reusable data pipeline and methodological blueprint for low-resource multilingual LLM development.
📝 Abstract
In the pretraining of Large Language Models (LLMs), synthetic data has emerged as an alternative for generating high-quality pretraining data at scale. This is particularly beneficial in low-resource language settings, where the benefits of recent LLMs have been unevenly distributed across languages. In this work, we present a systematic study on the generation and evaluation of synthetic multilingual pretraining data for Indic languages, constructing a large-scale synthetic dataset, BhashaKritika, comprising 540B tokens generated with 5 different techniques for 10 languages. We explore the impact of grounding generation in documents, personas, and topics. We analyze how language choice, both in the prompt instructions and in document grounding, affects data quality, and we compare translations of English content with native generation in Indic languages. To support scalable and language-sensitive evaluation, we introduce a modular quality evaluation pipeline that integrates script and language detection, metadata consistency checks, n-gram repetition analysis, and perplexity-based filtering using KenLM models. Our framework enables robust quality control across diverse scripts and linguistic contexts. Empirical results from model runs reveal key trade-offs in generation strategies and highlight best practices for constructing effective multilingual corpora.
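The abstract does not spell out how the n-gram repetition check in the quality pipeline is implemented. As a rough illustration only, the sketch below shows one common way such a filter can work: score each document by the fraction of its n-grams that are repeats, and reject documents above a threshold. The function names, the whitespace tokenization, and the 0.2 threshold are assumptions for illustration, not details from the paper.

```python
from collections import Counter

def ngram_repetition_ratio(tokens: list[str], n: int = 3) -> float:
    """Fraction of n-gram occurrences that repeat an earlier n-gram.

    0.0 means every n-gram is unique; values near 1.0 indicate heavy
    looping/boilerplate, a frequent failure mode of synthetic text.
    """
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c - 1 for c in counts.values())  # occurrences beyond the first
    return repeated / len(ngrams)

def passes_repetition_filter(text: str, n: int = 3, max_ratio: float = 0.2) -> bool:
    """Keep a document only if its n-gram repetition ratio is acceptable.

    Whitespace tokenization is a simplification; Indic scripts may call
    for script-aware tokenization in a production pipeline.
    """
    return ngram_repetition_ratio(text.split(), n) <= max_ratio
```

A perplexity stage (e.g. via per-language KenLM models, as the abstract describes) would typically run after such cheap surface checks, since language-model scoring is the more expensive step.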