Olmix: A Framework for Data Mixing Throughout LM Development

📅 2026-02-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently determining data mixing ratios for language model training when the set of domains evolves over time, a setting in which existing approaches are inefficient and impractical. To this end, the authors propose Olmix, a framework that systematically analyzes the data mixing configuration space, identifies the design choices that yield a strong mixing method, and introduces a "mixture reuse" mechanism that incrementally recomputes mixing ratios from past mixtures when the domain set changes. Across five rounds of real-world domain-set updates, Olmix matches the performance of full recomputation with 74% less compute and improves over training without mixing by 11.6% on downstream tasks.

๐Ÿ“ Abstract
Data mixing -- determining the ratios of data from different domains -- is a first-order concern for training language models (LMs). While existing mixing methods show promise, they fall short when applied during real-world LM development. We present Olmix, a framework that addresses two such challenges. First, the configuration space for developing a mixing method is not well understood -- design choices across existing methods lack justification or consensus and overlook practical issues like data constraints. We conduct a comprehensive empirical study of this space, identifying which design choices lead to a strong mixing method. Second, in practice, the domain set evolves throughout LM development as datasets are added, removed, partitioned, and revised -- a problem setting largely unaddressed by existing works, which assume fixed domains. We study how to efficiently recompute the mixture after the domain set is updated, leveraging information from past mixtures. We introduce mixture reuse, a mechanism that reuses existing ratios and recomputes ratios only for domains affected by the update. Over a sequence of five domain-set updates mirroring real-world LM development, mixture reuse matches the performance of fully recomputing the mix after each update with 74% less compute and improves over training without mixing by 11.6% on downstream tasks.
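The mixture-reuse idea described in the abstract can be sketched in a few lines: keep the existing ratios for domains untouched by an update, and recompute ratios only for domains that were added, removed, partitioned, or revised. The sketch below is illustrative, not the paper's actual algorithm; `reuse_mixture` is a hypothetical name, and the uniform reallocation stands in for whatever mixing method recomputes the affected ratios.

```python
def reuse_mixture(old_ratios, new_domains, affected):
    """Illustrative mixture reuse: retain ratios of unaffected domains,
    recompute only the affected ones (uniform split as a placeholder
    for the actual mixing method)."""
    # Keep ratios for domains that survive the update untouched.
    kept = {d: r for d, r in old_ratios.items()
            if d in new_domains and d not in affected}
    # Every remaining domain (new or affected) needs a fresh ratio.
    to_recompute = {d for d in new_domains if d not in kept}
    remaining = 1.0 - sum(kept.values())
    recomputed = ({d: remaining / len(to_recompute) for d in to_recompute}
                  if to_recompute else {})
    # Renormalize so the mixture sums to 1.
    mix = {**kept, **recomputed}
    total = sum(mix.values())
    return {d: r / total for d, r in mix.items()}


# Example: "math" is removed, "books" is added; "web" and "code"
# keep their old ratios and only "books" is recomputed.
old = {"web": 0.5, "code": 0.3, "math": 0.2}
new_mix = reuse_mixture(old, {"web", "code", "books"}, {"books"})
```

Because only the affected domains trigger recomputation, the cost of an update scales with the size of the change rather than the full domain set, which is the source of the compute savings the abstract reports.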
Problem

Research questions and friction points this paper is trying to address.

data mixing
language model development
domain evolution
configuration space
dynamic datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

data mixing
language model development
mixture reuse
domain evolution
empirical study