Measuring Diversity in Synthetic Datasets

📅 2025-02-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of accurately quantifying diversity in synthetic text datasets generated by large language models (LLMs), this paper proposes DCScore—a novel diversity evaluation method based on a mutual classification paradigm. Unlike conventional distance- or entropy-based metrics, DCScore formalizes diversity as cross-class discriminability among samples, theoretically satisfying key diversity axioms: monotonicity, symmetry, and additivity—ensuring both principled grounding and computational efficiency. The method integrates information-theoretic similarity measures with axiom-driven modeling. Empirical evaluation on multiple diversity pseudo-ground-truth benchmarks demonstrates that DCScore achieves significantly higher correlation with human judgments than state-of-the-art baselines (average improvement of +12.7%), while reducing computational overhead by an order of magnitude. The implementation is publicly available.

Technology Category

Application Category

📝 Abstract
Large language models (LLMs) are widely adopted to generate synthetic datasets for various natural language processing (NLP) tasks, such as text classification and summarization. However, accurately measuring the diversity of these synthetic datasets-an aspect crucial for robust model performance-remains a significant challenge. In this paper, we introduce DCScore, a novel method for measuring synthetic dataset diversity from a classification perspective. Specifically, DCScore formulates diversity evaluation as a sample classification task, leveraging mutual relationships among samples. We further provide theoretical verification of the diversity-related axioms satisfied by DCScore, highlighting its role as a principled diversity evaluation method. Experimental results on synthetic datasets reveal that DCScore enjoys a stronger correlation with multiple diversity pseudo-truths of evaluated datasets, underscoring its effectiveness. Moreover, both empirical and theoretical evidence demonstrate that DCScore substantially reduces computational costs compared to existing approaches. Code is available at: https://github.com/BlueWhaleLab/DCScore.
Problem

Research questions and friction points this paper is trying to address.

Measure diversity in synthetic datasets.
Introduce DCScore for diversity evaluation.
Reduce computational costs in diversity measurement.
Innovation

Methods, ideas, or system contributions that make the work stand out.

DCScore measures synthetic dataset diversity
Leverages sample classification for diversity evaluation
Reduces computational costs significantly
🔎 Similar Papers
No similar papers found.