Evaluating Language Models as Synthetic Data Generators

📅 2024-12-04
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 10
Influential: 0
🤖 AI Summary
Existing research lacks a systematic, cross-model evaluation of large language models (LLMs) as synthetic data generators. This paper introduces AgoraBench, a benchmark that uniformly assesses the data generation capability of six state-of-the-art LLMs, including GPT-4o and Claude-3.5-Sonnet. Using 1.26 million generated training instances, the authors train and evaluate 99 student models across problem generation and augmentation settings. The contributions are fourfold: (1) the first benchmark explicitly designed to measure LLMs' data generation capability; (2) evidence that data generation ability does not necessarily correlate with problem-solving ability; (3) a suite of intrinsic data quality metrics, including response quality, perplexity, and instruction difficulty, that collectively serve as better predictors of generation effectiveness; and (4) the finding that GPT-4o excels at generating new problems while Claude-3.5-Sonnet performs better at enhancing existing ones, with output format choices and cost-conscious model selection further shaping generation efficacy.

📝 Abstract
Given the increasing use of synthetic data in language model (LM) post-training, an LM's ability to generate high-quality data has become nearly as crucial as its ability to solve problems directly. While prior works have focused on developing effective data generation methods, they lack systematic comparison of different LMs as data generators in a unified setting. To address this gap, we propose AgoraBench, a benchmark that provides standardized settings and metrics to evaluate LMs' data generation abilities. Through synthesizing 1.26 million training instances using 6 LMs and training 99 student models, we uncover key insights about LMs' data generation capabilities. First, we observe that LMs exhibit distinct strengths. For instance, GPT-4o excels at generating new problems, while Claude-3.5-Sonnet performs better at enhancing existing ones. Furthermore, our analysis reveals that an LM's data generation ability doesn't necessarily correlate with its problem-solving ability. Instead, multiple intrinsic features of data quality, including response quality, perplexity, and instruction difficulty, collectively serve as better indicators. Finally, we demonstrate that strategic choices in output format and cost-conscious model selection significantly impact data generation effectiveness.
Problem

Research questions and friction points this paper is trying to address.

How to evaluate language models as synthetic data generators
Lack of a systematic comparison of different LMs' data generation abilities in a unified setting
Whether an LM's data generation ability correlates with its problem-solving ability
Innovation

Methods, ideas, or system contributions that make the work stand out.

AgoraBench, a standardized benchmark for evaluating LMs' data generation abilities
Intrinsic data quality features (response quality, perplexity, instruction difficulty) as indicators of data generation ability
Strategic choices in output format and cost-conscious model selection