AI Summary
High-quality synthetic data for LLM evaluation remains scarce, and quantitatively controlling its difficulty and diversity is challenging. Method: This paper proposes Data Swarms, a framework that employs Particle Swarm Optimization (PSO) to cooperatively orchestrate multiple data generators, enabling joint optimization of multiple evaluation objectives (e.g., difficulty, diversity, fidelity). It further introduces Adversarial Swarms, a co-evolutionary adversarial mechanism in which the data generator swarm produces harder data while the test taker model swarm learns from it, dynamically enhancing the evaluative power of generated data. Contribution/Results: Data Swarms outperforms eight data generation baselines across five evaluation objectives, while Adversarial Swarms yields more robust learning from synthetic data and stronger generalization. Cross-model generalizability is empirically validated on multiple off-the-shelf commercial LLMs unseen at optimization time.
Abstract
We propose Data Swarms, an algorithm to optimize the generation of synthetic evaluation data and advance quantitative desiderata of LLM evaluation. We first train a swarm of initial data generators using existing data, and define various evaluation objectives to reflect the desired properties of evaluation (e.g., generate more difficult problems for the evaluated models) and quantitatively evaluate data generators. We then employ particle swarm optimization to optimize the swarm of data generators, where they collaboratively search through the model parameter space to find new generators that advance these objectives. We further extend it to Adversarial Swarms, where the data generator swarm generates harder data while the test taker model swarm learns from such data, co-evolving dynamically for better data and models simultaneously. Extensive experiments demonstrate that Data Swarms outperforms eight data generation baselines across five evaluation objectives, while Adversarial Swarms produces more robust learning from synthetic data and stronger generalization. Further analysis reveals that Data Swarms successfully optimizes compositions of multiple evaluation objectives and generalizes to new off-the-shelf LLMs, unseen at optimization time.
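The PSO search over generator parameters described above can be illustrated with a minimal sketch. This is not the paper's implementation: the particles here are small toy parameter vectors standing in for data generator weights, and the objective is a hypothetical stand-in for an evaluation desideratum (e.g., the difficulty of generated problems). Only the standard PSO update rule (inertia plus pulls toward personal and global bests) is taken as given.

```python
import numpy as np

def pso(objective, dim=8, n_particles=6, iters=50,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximize `objective` with a basic particle swarm.

    Each particle is a flattened parameter vector; in the paper's setting
    this would be a data generator's weights (toy-sized here).
    """
    rng = np.random.default_rng(seed)
    pos = rng.normal(size=(n_particles, dim))   # candidate generator params
    vel = np.zeros_like(pos)
    pbest = pos.copy()                          # per-particle best position
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()    # swarm-wide best position

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard PSO update: inertia + pull toward personal/global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

# Hypothetical objective: negative squared distance to a target vector,
# standing in for "advance the evaluation objective".
target = np.full(8, 0.5)
best, best_val = pso(lambda p: -np.sum((p - target) ** 2))
```

The swarm collaboratively converges toward parameters that maximize the objective; in Adversarial Swarms, a second (test taker) swarm would be updated in alternation against the data this swarm generates.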