🤖 AI Summary
Existing LLM-based synthetic data generation methods suffer from insufficient diversity. To address this, we propose a training-free, closed-model-compatible iterative diversity optimization framework. Our approach explicitly models semantic similarity among data instances using Determinantal Point Processes (DPPs), and integrates zero-shot prompting with iterative reweighted sampling—enabling controllable diversity enhancement without model fine-tuning. The framework is theoretically grounded, interpretable, and highly scalable. Extensive experiments across multi-task synthetic data generation scenarios demonstrate that our method improves dataset diversity by 1.5–3× over state-of-the-art baselines, leading to substantial gains in downstream task generalization performance.
📝 Abstract
Large language models (LLMs) are increasingly being used to generate synthetic datasets for the evaluation and training of downstream models. However, prior work has noted that such generated data lacks diversity. In this paper, we propose Voyager, a novel principled approach to generate diverse datasets. Our approach is iterative and directly optimizes a mathematical quantity that optimizes the diversity of the dataset using the machinery of determinantal point processes. Furthermore, our approach is training-free, applicable to closed-source models, and scalable. In addition to providing theoretical justification for the working of our method, we also demonstrate through comprehensive experiments that Voyager significantly outperforms popular baseline approaches by providing a 1.5-3x improvement in diversity.