RoSE: Round-robin Synthetic Data Evaluation for Selecting LLM Generators without Human Test Sets

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In low-resource language settings where human-annotated test sets are unavailable, how can the best large language model (LLM) be selected as a synthetic data generator without supervision? Method: RoSE is a framework built on a round-robin cross-evaluation mechanism: a lightweight model is trained on each candidate LLM's synthetic data and evaluated on the data generated by all other candidates; its average downstream performance serves as an unsupervised proxy metric that correlates strongly with true task performance. Contribution/Results: Evaluated across 6 LLMs, 11 languages, and 3 task categories, RoSE outperforms conventional intrinsic metrics (e.g., perplexity), is the only method whose correlation with human-annotated test-set performance reaches statistical significance, and trails the oracle best-generator baseline by only 0.76 percentage points. RoSE is the first generalizable, annotation-free framework for selecting LLM-based synthetic data generators.

📝 Abstract
LLMs are powerful generators of synthetic data, which is used to train smaller, task-specific models. This is especially valuable for low-resource languages, where human-labelled data is scarce but LLMs can still produce high-quality text. However, LLMs differ in how useful their outputs are for training. Selecting the best LLM as a generator is challenging because extrinsic evaluation requires costly human annotations (which are often unavailable for low-resource languages), while intrinsic metrics correlate poorly with downstream performance. We introduce Round-robin Synthetic data Evaluation (RoSE), a proxy metric for selecting the best LLM generator without human test sets. RoSE trains a small model on the outputs of a candidate generator (LLM) and then evaluates it on generated synthetic examples from all other candidate LLMs. The final RoSE score is the mean performance of this small model. Across six LLMs, eleven languages, and three tasks (sentiment, topic, intent), RoSE identifies the optimal generator more often than any intrinsic heuristic. RoSE outperforms intrinsic heuristics and comes within 0.76 percentage points of the optimal-generator baseline. This result is measured in terms of downstream performance, obtained by training a small model on the chosen generator's outputs (optimal vs. proxy-metric selected) and evaluating it on human-labelled test data. Additionally, RoSE is the only metric to achieve a positive correlation with performance on human test data.
Problem

Research questions and friction points this paper is trying to address.

Selecting optimal LLM generators without human test sets
Evaluating synthetic data quality for low-resource languages
Replacing costly human annotations with proxy metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Round-robin evaluation method for LLM selection
Proxy metric eliminates need for human annotations
Trains small models on cross-LLM synthetic data
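The round-robin scoring described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the dataset format, the function names, and especially the majority-label "small model" are stand-ins (the paper fine-tunes lightweight models on real synthetic text and measures downstream task accuracy):

```python
from collections import Counter

def train_small_model(train_data):
    # Toy stand-in for a lightweight classifier: always predict the
    # majority label of the training set. A real run would fine-tune
    # a small model on the generator's synthetic examples.
    majority = Counter(label for _, label in train_data).most_common(1)[0][0]
    return lambda example: majority

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

def rose_scores(synthetic_sets):
    """synthetic_sets: dict mapping generator name -> list of (example, label).

    For each candidate generator, train a small model on that generator's
    synthetic data, then average its accuracy on every *other* candidate's
    synthetic data. That average is the generator's RoSE score."""
    scores = {}
    for gen, train_data in synthetic_sets.items():
        model = train_small_model(train_data)
        others = [g for g in synthetic_sets if g != gen]
        scores[gen] = sum(
            accuracy(model, synthetic_sets[g]) for g in others
        ) / len(others)
    return scores

def select_generator(synthetic_sets):
    # The chosen generator is the one whose trained model performs
    # best, on average, across all other candidates' synthetic data.
    scores = rose_scores(synthetic_sets)
    return max(scores, key=scores.get)
```

For example, with three hypothetical generators `"A"`, `"B"`, and `"C"`, `select_generator` returns the one whose model transfers best to the other two synthetic sets, with no human-labelled test data involved.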