Persona Generators: Generating Diverse Synthetic Personas at Scale

πŸ“… 2026-02-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Evaluating AI systems' interactive behaviors across diverse populations is hindered by the cost and scarcity of real human data, particularly for capturing rare or hypothetical behaviors in long-tail scenarios. To address this, the work introduces lightweight Persona Generators and, for the first time, applies the AlphaEvolve framework to synthetic persona generation. Using large language models as mutation operators, the method iteratively evolves generator code that expands minimal seed descriptions into rich, diverse synthetic personas spanning a broad spectrum of viewpoints and preferences. By prioritizing coverage of behavioral diversity over fidelity to empirical distribution densities, and by steering generation along relevant diversity axes, the evolved generators substantially outperform existing baselines across six diversity metrics, producing rare behavioral combinations that standard LLM outputs struggle to reproduce.
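The summary's "iterative improvement loop with LLMs as mutation operators" can be sketched as a simple evolutionary search. The sketch below is a minimal hill-climbing variant, not the paper's actual AlphaEvolve implementation: `add_random_trait` and `coverage` are toy stand-ins (the real system uses an LLM to mutate generator code and scores populations with six diversity metrics), and the trait list is invented for illustration.

```python
import random

def sketch_evolve(seed, mutate, score, iterations=50, rng_seed=0):
    """Greedy evolutionary loop: mutate the best candidate seen so far
    and keep the mutation only if it improves the score."""
    rng = random.Random(rng_seed)
    best, best_score = seed, score(seed)
    for _ in range(iterations):
        candidate = mutate(best, rng)
        candidate_score = score(candidate)
        if candidate_score > best_score:  # greedy acceptance
            best, best_score = candidate, candidate_score
    return best, best_score

# Toy domain (an assumption for illustration; the paper evolves
# generator *code*, not trait sets): a "persona" is a set of traits.
TRAITS = ["optimist", "skeptic", "risk-averse", "early-adopter", "traditionalist"]

def add_random_trait(persona, rng):
    # Stand-in for an LLM mutation operator.
    return persona | {rng.choice(TRAITS)}

def coverage(persona):
    # Toy diversity score: number of distinct traits covered.
    return len(persona)

best, best_score = sketch_evolve(frozenset({"optimist"}), add_random_trait, coverage)
print(best_score)
```

Because acceptance is greedy on the coverage score, the loop monotonically accumulates traits; the paper's framework operates analogously but over hundreds of iterations of program mutation.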

πŸ“ Abstract
Evaluating AI systems that interact with humans requires understanding their behavior across diverse user populations, but collecting representative human data is often expensive or infeasible, particularly for novel technologies or hypothetical future scenarios. Recent work in Generative Agent-Based Modeling has shown that large language models can simulate human-like synthetic personas with high fidelity, accurately reproducing the beliefs and behaviors of specific individuals. However, most approaches require detailed data about target populations and often prioritize density matching (replicating what is most probable) rather than support coverage (spanning what is possible), leaving long-tail behaviors underexplored. We introduce Persona Generators, functions that can produce diverse synthetic populations tailored to arbitrary contexts. We apply an iterative improvement loop based on AlphaEvolve, using large language models as mutation operators to refine our Persona Generator code over hundreds of iterations. The optimization process produces lightweight Persona Generators that can automatically expand small descriptions into populations of diverse synthetic personas that maximize coverage of opinions and preferences along relevant diversity axes. We demonstrate that evolved generators substantially outperform existing baselines across six diversity metrics on held-out contexts, producing populations that span rare trait combinations difficult to achieve in standard LLM outputs.
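The abstract's distinction between density matching and support coverage suggests a coverage-style objective: reward a population for touching every cell along the relevant diversity axes, regardless of how probable each cell is. The metric below is a hypothetical, minimal illustration of that idea; the axis names and the exact scoring rule are assumptions, not the paper's six metrics.

```python
def axis_coverage(personas, axes):
    """Fraction of (axis, value) cells covered by at least one persona.

    personas: list of dicts mapping axis name -> categorical value.
    axes: dict mapping axis name -> list of admissible values.
    """
    total = sum(len(values) for values in axes.values())
    covered = {
        (axis, persona[axis])
        for persona in personas
        for axis in axes
        if axis in persona and persona[axis] in axes[axis]
    }
    return len(covered) / total

# Hypothetical diversity axes and a two-persona population.
axes = {"risk": ["low", "high"], "tech": ["novice", "expert"]}
population = [
    {"risk": "low", "tech": "expert"},
    {"risk": "low", "tech": "novice"},
]
print(axis_coverage(population, axes))  # 3 of 4 cells covered -> 0.75
```

Under this objective, adding a persona with `{"risk": "high"}` raises the score even if high-risk users are rare in the real population, which is exactly the coverage-over-density trade-off the abstract describes.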
Problem

Research questions and friction points this paper is trying to address.

synthetic personas
diversity coverage
long-tail behaviors
human-AI interaction evaluation
generative agent-based modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Persona Generators
Generative Agent-Based Modeling
AlphaEvolve
Diversity Coverage
Synthetic Personas
πŸ‘₯ Authors
Davide Paglieri, University College London (Artificial Intelligence Β· Reinforcement Learning Β· Deep Learning Β· Open-Endedness)
Logan Cross, Google DeepMind
William A. Cunningham, Google DeepMind
Joel Z. Leibo, Google DeepMind
A. Vezhnevets, Google DeepMind