🤖 AI Summary
Evaluating AI systems' interactive behaviors across diverse populations is hindered by the high cost and scarcity of real human data, particularly when capturing rare or hypothetical behaviors in long-tail scenarios. To address this, this work proposes a lightweight Persona Generator architecture, applying the AlphaEvolve framework to synthetic persona generation for the first time. Using large language models as mutation operators, the method iteratively refines minimal initial descriptions into rich, diverse synthetic personas spanning a broad spectrum of viewpoints and preferences. By emphasizing coverage of behavioral diversity over fidelity to empirical distribution densities, and by incorporating an axis-guided steering strategy, the approach significantly outperforms existing baselines across six diversity metrics, generating rare behavioral combinations that standard LLMs struggle to produce.
📄 Abstract
Evaluating AI systems that interact with humans requires understanding their behavior across diverse user populations, but collecting representative human data is often expensive or infeasible, particularly for novel technologies or hypothetical future scenarios. Recent work in Generative Agent-Based Modeling has shown that large language models can simulate human-like synthetic personas with high fidelity, accurately reproducing the beliefs and behaviors of specific individuals. However, most approaches require detailed data about target populations and often prioritize density matching (replicating what is most probable) rather than support coverage (spanning what is possible), leaving long-tail behaviors underexplored. We introduce Persona Generators, functions that can produce diverse synthetic populations tailored to arbitrary contexts. We apply an iterative improvement loop based on AlphaEvolve, using large language models as mutation operators to refine our Persona Generator code over hundreds of iterations. The optimization process produces lightweight Persona Generators that can automatically expand small descriptions into populations of diverse synthetic personas that maximize coverage of opinions and preferences along relevant diversity axes. We demonstrate that evolved generators substantially outperform existing baselines across six diversity metrics on held-out contexts, producing populations that span rare trait combinations difficult to achieve in standard LLM outputs.