🤖 AI Summary
Multilingual instruction fine-tuning (IFT) is hindered by the scarcity of high-quality, semantically diverse training data; existing approaches rely on English-centric heuristics with poor cross-lingual generalizability. To address this, we propose M-DaQ, the first language-agnostic, general-purpose data selection framework for multilingual IFT. M-DaQ systematically challenges and surpasses the Superficial Alignment Hypothesis (SAH) by jointly modeling multilingual embedding spaces, quantifying semantic diversity, and performing quality-aware clustering to ensure cross-lingually consistent data filtering. Experiments across 18 languages demonstrate that models trained on M-DaQ-curated datasets achieve an average win rate exceeding 60% in pairwise comparisons. Human evaluation further confirms significant improvements in the cultural appropriateness and semantic richness of model responses. This work establishes a scalable, language-independent paradigm for constructing high-fidelity instruction data in multilingual settings.
📝 Abstract
Multilingual Instruction Fine-Tuning (IFT) is essential for enabling large language models (LLMs) to generalize effectively across diverse linguistic and cultural contexts. However, the scarcity of high-quality multilingual training data, and of methods for constructing it, remains a critical bottleneck. While data selection has shown promise in English settings, existing methods often fail to generalize across languages because they rely on simplistic heuristics or language-specific assumptions. In this work, we introduce Multilingual Data Quality and Diversity (M-DaQ), a novel method for improving the multilinguality of LLMs by selecting high-quality and semantically diverse multilingual IFT samples. We also conduct the first systematic investigation of the Superficial Alignment Hypothesis (SAH) in a multilingual setting. Empirical results across 18 languages demonstrate that models fine-tuned with M-DaQ achieve significant gains over vanilla baselines, with win rates exceeding 60%. Human evaluations further validate these gains, highlighting the increased presence of culturally relevant content in model responses. We release the M-DaQ code to support future research.
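The summary describes selecting IFT samples by trading off per-sample quality against semantic diversity in a multilingual embedding space. The paper's actual algorithm is not reproduced here; the following is only a minimal illustrative sketch of one common way to combine the two criteria (greedy selection that penalizes redundancy via cosine similarity), assuming precomputed sentence embeddings and quality scores as inputs. All function and variable names are hypothetical.

```python
import numpy as np

def select_samples(embeddings: np.ndarray, quality: np.ndarray, k: int) -> list[int]:
    """Toy quality-aware, diversity-driven selection (illustrative, not M-DaQ itself).

    embeddings: (n, d) array of multilingual sentence embeddings (assumed input)
    quality:    (n,) array of per-sample quality scores (assumed input)
    k:          number of samples to select
    """
    # Normalize rows so dot products are cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Seed with the single highest-quality sample.
    selected = [int(np.argmax(quality))]
    while len(selected) < k:
        # Similarity of every sample to its nearest already-selected sample.
        sims = emb @ emb[selected].T
        redundancy = sims.max(axis=1)
        # Score balances quality against redundancy with the selected set.
        score = quality - redundancy
        score[selected] = -np.inf  # never re-pick a selected sample
        selected.append(int(np.argmax(score)))
    return selected

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))     # stand-in for real multilingual embeddings
qual = rng.uniform(size=100)         # stand-in for real quality scores
chosen = select_samples(emb, qual, k=10)
print(chosen)
```

A real pipeline would replace the greedy loop with the paper's clustering step and derive `quality` from a learned scorer, but the quality-versus-redundancy trade-off shown here is the core intuition.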