🤖 AI Summary
This work proposes LACON, a novel framework that challenges the conventional “filter-then-train” paradigm in text-to-image generation by directly leveraging raw, uncurated datasets without discarding low-quality or unlabeled samples. Instead of relying solely on high-quality filtered data, LACON explicitly incorporates quantitative quality signals—such as aesthetic scores and watermark probabilities—as conditional labels during end-to-end diffusion model training. This approach enables the model to learn the full spectrum of data quality, from low to high, thereby harnessing otherwise wasted information. Under identical computational budgets, LACON significantly outperforms baseline models trained exclusively on filtered data, demonstrating the substantial value of effectively utilizing low-quality data in generative modeling.
📝 Abstract
The success of modern text-to-image generation is largely attributed to massive, high-quality datasets. Currently, these datasets are curated through a filter-first paradigm that aggressively discards low-quality raw data, based on the assumption that such data is detrimental to model performance. Is the discarded bad data truly useless, or does it hold untapped potential? In this work, we critically re-examine this question. We propose LACON (Labeling-and-Conditioning), a novel training framework that exploits the underlying uncurated data distribution. Instead of filtering, LACON re-purposes quality signals, such as aesthetic scores and watermark probabilities, as explicit, quantitative condition labels. The generative model is then trained to learn the full spectrum of data quality, from bad to good. By learning the explicit boundary between high- and low-quality content, LACON achieves superior generation quality compared to baselines trained only on filtered data under the same compute budget, demonstrating the significant value of uncurated data.
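The core idea, keeping every sample but attaching its quality signals as condition labels, can be sketched as follows. This is an illustrative sketch, not the paper's actual implementation: the signal ranges, bin counts, and all function and field names below are assumptions made for clarity.

```python
# Minimal sketch of labeling-and-conditioning (illustrative only).
# Instead of filtering samples below a quality threshold, each continuous
# quality signal is quantized into a discrete label that the generative
# model receives as an extra condition during training.

from dataclasses import dataclass


@dataclass
class Sample:
    caption: str
    aesthetic_score: float  # assumed range 0.0 (low) .. 10.0 (high)
    watermark_prob: float   # assumed range 0.0 .. 1.0


def quantize(value: float, low: float, high: float, bins: int) -> int:
    """Map a continuous quality signal to one of `bins` discrete labels."""
    value = min(max(value, low), high)       # clamp to the assumed range
    frac = (value - low) / (high - low)
    return min(int(frac * bins), bins - 1)   # keep the top edge in-range


def condition_labels(s: Sample) -> dict:
    """Build quality-condition labels rather than discarding the sample."""
    return {
        "aesthetic_bin": quantize(s.aesthetic_score, 0.0, 10.0, bins=8),
        "watermark_bin": quantize(s.watermark_prob, 0.0, 1.0, bins=4),
    }


# Every sample is kept at training time, paired with its labels; at
# sampling time one would condition on the highest-quality labels
# (e.g. top aesthetic bin, zero watermark bin).
sample = Sample("a photo of a cat", aesthetic_score=3.2, watermark_prob=0.9)
print(condition_labels(sample))  # low aesthetic bin, high watermark bin
```

In a real diffusion pipeline these discrete labels would typically be embedded and injected alongside the text conditioning, so that "quality" becomes a controllable axis rather than a filtering criterion.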