🤖 AI Summary
Existing multilingual instruction tuning approaches overlook the intrinsic linguistic structure of the data during selection. To address this, we propose LangGPS, the first framework to adopt *language separability*—the degree to which model representations distinguish language identity—as the core criterion for data selection. LangGPS is a lightweight two-stage pre-selection mechanism: (1) filter for high-separability samples based on language separability scores; (2) refine the resulting subset with established selection methods. Language separability also serves as an effective signal for multilingual curriculum learning. Experiments across six benchmarks and 22 languages demonstrate that LangGPS significantly improves multilingual understanding performance—particularly for low-resource languages—and consistently enhances the generalizability and convergence speed of diverse data selection methods. More broadly, it offers a new paradigm for evaluating multilingual data utility grounded in representational language separability.
📝 Abstract
Joint multilingual instruction tuning is a widely adopted approach to improve the multilingual instruction-following ability and downstream performance of large language models (LLMs), but the resulting multilingual capability remains highly sensitive to the composition and selection of the training data. Existing selection methods, often based on features like text quality, diversity, or task relevance, typically overlook the intrinsic linguistic structure of multilingual data. In this paper, we propose LangGPS, a lightweight two-stage pre-selection framework guided by language separability, which quantifies how well samples in different languages can be distinguished in the model's representation space. LangGPS first filters training data based on separability scores and then refines the subset using existing selection methods. Extensive experiments across six benchmarks and 22 languages demonstrate that applying LangGPS on top of existing selection methods improves their effectiveness and generalizability in multilingual training, especially for understanding tasks and low-resource languages. Further analysis reveals that highly separable samples facilitate the formation of clearer language boundaries and support faster adaptation, while low-separability samples tend to function as bridges for cross-lingual alignment. In addition, we find that language separability can serve as an effective signal for multilingual curriculum learning, where interleaving samples with diverse separability levels yields stable and generalizable gains. Together, we hope our work offers a new perspective on data utility in multilingual contexts and supports the development of more linguistically informed LLMs.
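The abstract does not spell out how separability is computed, so the sketch below is only a plausible proxy, not the paper's actual formulation: it scores each sample by the cosine-similarity margin between its own language's centroid and the nearest other-language centroid in a (hypothetical) embedding space, then applies the two-stage pattern of filtering by separability before handing off to any existing selector. All function and variable names here are invented for illustration.

```python
import numpy as np

def separability_scores(embeddings, lang_ids):
    """Per-sample language-separability proxy (an assumption, not the
    paper's exact method): cosine similarity to the sample's own
    language centroid minus similarity to the closest other-language
    centroid. Higher margin = more linguistically separable."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    langs = sorted(set(lang_ids))
    centroids = {}
    for lang in langs:
        c = X[[i for i, g in enumerate(lang_ids) if g == lang]].mean(axis=0)
        centroids[lang] = c / np.linalg.norm(c)
    scores = []
    for x, lang in zip(X, lang_ids):
        own = float(x @ centroids[lang])
        other = max(float(x @ centroids[m]) for m in langs if m != lang)
        scores.append(own - other)
    return np.array(scores)

def two_stage_select(embeddings, lang_ids, keep_ratio, refine_fn):
    """Stage 1: keep the top keep_ratio fraction of samples by
    separability score. Stage 2: pass the surviving indices to any
    existing selection method (quality, diversity, ...)."""
    scores = separability_scores(embeddings, lang_ids)
    k = max(1, int(len(scores) * keep_ratio))
    kept = np.argsort(-scores)[:k]
    return refine_fn(kept)
```

A quality- or diversity-based selector plugs in as `refine_fn`, so the separability filter composes with existing pipelines rather than replacing them; the same per-sample scores could also order samples for the curriculum-learning variant mentioned above.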