🤖 AI Summary
Dataset distillation condenses a large training set into a much smaller synthetic one that yields similar results when used for training. While the vision domain has an extensive literature on distillation, text dataset distillation is comparatively underexplored: it began as an adaptation of image-based methods, and as the particularities of the modality (notably the discrete nature of tokens) became clear obstacles, it grew into a separate branch of research. This report reviews past and recent advances in text dataset distillation, covering milestones such as transformer-based methods, the generation of discrete synthetic text, and scaling to decoder-only models with over 1B parameters. It also highlights the different distillation strategies, key contributions, and open challenges of this maturing field, including benchmark standardization, handling the discreteness of text, supporting complex tasks, and demonstrating real-world applications.
📝 Abstract
In the vision domain, dataset distillation has arisen as a technique to condense a large dataset into a smaller synthetic one that yields similar results during training. While image data enjoys an extensive literature of distillation methods, text dataset distillation has comparatively few works. Text dataset distillation initially grew as an adaptation of efforts from the vision domain; as the particularities of the modality became clear obstacles, it rose into a separate branch of research. Several milestones mark the development of this area, such as the introduction of methods that use transformer models, the generation of discrete synthetic text, and the scaling to decoder-only models with over 1B parameters. Despite major advances in modern approaches, the field remains in a maturing phase, with room for improvement in benchmark standardization, approaches to overcome the discrete nature of text, the handling of complex tasks, and explicit examples of real-world applications. In this report, we review past and recent advances in dataset distillation for text, highlighting different distillation strategies, key contributions, and general challenges.
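The vision-style distillation idea the abstract refers to can be sketched with gradient matching: synthetic examples are optimized so that the loss gradient they induce matches the gradient induced by the full real dataset. The toy below (a minimal illustrative sketch with made-up data, not the paper's method) distills 200 continuous-feature points into 4 synthetic ones for a logistic-regression learner; the discreteness of text tokens is precisely what makes this continuous optimization hard to apply directly to NLP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real dataset: 200 points, 2 classes, 5 features (synthetic toy data).
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def grad_logistic(w, X, y):
    """Gradient of the mean logistic loss with respect to the weights w."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

# Distilled set: 4 points (2 per class), optimized so that its loss gradient
# matches the real data's gradient at randomly sampled model weights.
Xs = rng.normal(size=(4, 5))
ys = np.array([0.0, 0.0, 1.0, 1.0])

lr = 0.5
eps = 1e-4
for step in range(500):
    w = rng.normal(size=5)                  # sample a model initialization
    g_real = grad_logistic(w, X, y)
    g_syn = grad_logistic(w, Xs, ys)
    base = np.sum((g_syn - g_real) ** 2)    # gradient-matching loss
    # Finite-difference gradient of the matching loss w.r.t. the synthetic points.
    grad_Xs = np.zeros_like(Xs)
    for i in range(Xs.shape[0]):
        for j in range(Xs.shape[1]):
            Xp = Xs.copy()
            Xp[i, j] += eps
            gp = grad_logistic(w, Xp, ys)
            grad_Xs[i, j] = (np.sum((gp - g_real) ** 2) - base) / eps
    Xs -= lr * grad_Xs

# Train a fresh model on the 4 distilled points, evaluate on the real data.
w = np.zeros(5)
for _ in range(300):
    w -= 1.0 * grad_logistic(w, Xs, ys)
acc = np.mean(((X @ w) > 0) == (y > 0.5))
```

Real distillation methods replace the finite-difference step with automatic differentiation and match gradients along full training trajectories; for text, the synthetic inputs `Xs` would be discrete token sequences, which cannot be updated by such continuous gradient steps without relaxations such as soft embeddings.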