🤖 AI Summary
To address the challenge of accurately quantifying the diversity of synthetic text datasets generated by large language models (LLMs), this paper proposes DCScore, a novel diversity evaluation method based on a mutual classification paradigm. Unlike conventional distance- or entropy-based metrics, DCScore formalizes diversity as cross-class discriminability among samples and is theoretically shown to satisfy key diversity axioms (monotonicity, symmetry, and additivity), giving it both principled grounding and computational efficiency. The method integrates information-theoretic similarity measures with axiom-driven modeling. Empirical evaluation on multiple diversity pseudo-ground-truth benchmarks demonstrates that DCScore correlates with human judgments significantly better than state-of-the-art baselines (an average improvement of +12.7%) while reducing computational overhead by an order of magnitude. The implementation is publicly available.
📝 Abstract
Large language models (LLMs) are widely adopted to generate synthetic datasets for various natural language processing (NLP) tasks, such as text classification and summarization. However, accurately measuring the diversity of these synthetic datasets, an aspect crucial for robust model performance, remains a significant challenge. In this paper, we introduce DCScore, a novel method for measuring synthetic dataset diversity from a classification perspective. Specifically, DCScore formulates diversity evaluation as a sample classification task, leveraging mutual relationships among samples. We further provide theoretical verification of the diversity-related axioms satisfied by DCScore, highlighting its role as a principled diversity evaluation method. Experimental results on synthetic datasets reveal that DCScore exhibits a stronger correlation with multiple diversity pseudo-truths of the evaluated datasets, underscoring its effectiveness. Moreover, both empirical and theoretical evidence demonstrate that DCScore substantially reduces computational costs compared to existing approaches. Code is available at: https://github.com/BlueWhaleLab/DCScore.
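The abstract describes diversity evaluation framed as a sample classification task over mutual relationships among samples. The sketch below illustrates one way such a classification-based score can be computed; it is an assumption-based illustration of the paradigm, not the authors' implementation (see the linked repository for that). The function name `dcscore_sketch`, the cosine-similarity kernel, and the temperature parameter `tau` are all hypothetical choices made here for concreteness: each sample is treated as its own class, a row-wise softmax over pairwise similarities gives the probability that each sample is "classified" as itself, and those self-probabilities are aggregated.

```python
import numpy as np

def dcscore_sketch(embeddings: np.ndarray, tau: float = 1.0) -> float:
    """Hypothetical sketch of a classification-based diversity score.

    Each sample is treated as its own class. A softmax over pairwise
    cosine similarities yields, per row, the probability of assigning
    a sample to itself. Mutually dissimilar samples keep that
    probability high (high diversity); duplicates spread probability
    mass across each other (low diversity).
    """
    # Normalize embeddings so the Gram matrix holds cosine similarities.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T
    # Row-wise softmax with temperature tau (numerically stabilized).
    logits = sim / tau
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    # Sum of self-classification probabilities: ranges from ~1 when all
    # n samples are identical up to ~n when all are mutually dissimilar.
    return float(np.trace(probs))
```

For example, a dataset of identical embeddings yields a score of 1 regardless of size (each softmax row is uniform), while orthogonal embeddings at a low temperature push the score toward the sample count, which matches the intuitive behavior a classification-based diversity measure should have.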