🤖 AI Summary
This work addresses the lack of systematic evaluation of the quality and trustworthiness of synthetic data generated by large language models (LLMs), a gap exacerbated by existing studies' narrow focus on single modalities and on downstream task performance. To bridge this gap, the paper introduces the LLM Data Auditor framework, which establishes the first cross-modal intrinsic evaluation system spanning six modalities (including text and images) and defines intrinsic metrics along the dual dimensions of data quality and trustworthiness. Through a comprehensive literature review and a systematic categorization of evaluation metrics, the study identifies critical shortcomings in current assessment practices and proposes concrete pathways for improvement, thereby offering a methodological foundation for the reliable deployment of multimodal synthetic data.
📝 Abstract
Large Language Models (LLMs) have emerged as powerful tools for generating data across various modalities. By transforming data from a scarce resource into a controllable asset, LLMs mitigate the bottlenecks imposed by the acquisition costs of real-world data for model training, evaluation, and system iteration. However, ensuring the high quality of LLM-generated synthetic data remains a critical challenge. Existing research focuses primarily on generation methodologies, paying limited direct attention to the quality of the resulting data. Furthermore, most studies are restricted to a single modality and lack a unified perspective across data types. To bridge this gap, we propose the **LLM Data Auditor framework**. Within this framework, we first describe how LLMs are used to generate data across six distinct modalities. More importantly, we systematically categorize intrinsic metrics for evaluating synthetic data along two dimensions: quality and trustworthiness. This shifts the focus from extrinsic evaluation, which relies on downstream task performance, to the inherent properties of the data itself. Using this evaluation system, we analyze the experimental evaluations of representative generation methods for each modality and identify substantial deficiencies in current evaluation practices. Based on these findings, we offer concrete recommendations for improving the evaluation of data generation. Finally, we outline methodologies for the practical application of synthetic data across modalities.
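To make the intrinsic-versus-extrinsic distinction concrete, below is a minimal Python sketch of an intrinsic audit: it scores a batch of synthetic text samples on properties of the data itself (here, n-gram diversity and an exact-duplicate rate), with no downstream model in the loop. The metric choices and all function names (`distinct_n`, `duplication_rate`, `audit`) are illustrative assumptions on our part, not the paper's actual metric taxonomy.

```python
# Hypothetical illustration of an *intrinsic* synthetic-data audit.
# The specific metrics below are common stand-ins for "quality"; they are
# not taken from the LLM Data Auditor paper.
from collections import Counter


def distinct_n(samples: list[str], n: int = 2) -> float:
    """Diversity proxy: ratio of unique n-grams to total n-grams."""
    ngrams = []
    for text in samples:
        tokens = text.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0


def duplication_rate(samples: list[str]) -> float:
    """Redundancy proxy: fraction of samples that are exact duplicates."""
    counts = Counter(samples)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(samples) if samples else 0.0


def audit(samples: list[str]) -> dict[str, float]:
    """Aggregate intrinsic metrics; note that no downstream task is needed."""
    return {
        "distinct_2": distinct_n(samples, n=2),
        "duplication_rate": duplication_rate(samples),
    }


if __name__ == "__main__":
    synthetic = [
        "the model answers the question correctly",
        "the model answers the question correctly",
        "a generated reply with different wording entirely",
    ]
    print(audit(synthetic))  # e.g. {'distinct_2': 0.6875, 'duplication_rate': 0.333...}
```

An extrinsic evaluation would instead train or prompt a downstream model on `synthetic` and report task accuracy; it is precisely that dependency on downstream performance that the framework's intrinsic metrics are meant to remove.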