🤖 AI Summary
Traditional social surveys suffer from rigidity, high costs, and poor cross-cultural equivalence, while existing LLM research predominantly focuses on structured questionnaires, neglecting end-to-end modeling of the survey process and exacerbating the underrepresentation of marginalized groups due to data bias. To address these gaps, we propose the first full-pipeline social survey benchmark spanning four stages: role modeling, semi-structured interviewing, attitude/stance inference, and response generation. We introduce a multi-level evaluation framework that jointly assesses alignment fidelity, consistency, and fairness at both the individual and group levels. Leveraging open-source LLMs, we perform two-stage fine-tuning, integrating expert annotations with nationally representative survey data, to develop the SurveyLM model family. We release a multi-tiered dataset comprising over 44,000 interview dialogues and 400,000 structured questionnaire records, alongside fully open-sourced code, models, and tooling. Our approach significantly improves LLMs' fidelity, alignment, and representational fairness in social survey tasks.
📝 Abstract
Understanding human attitudes, preferences, and behaviors through social surveys is essential for academic research and policymaking. Yet traditional surveys face persistent challenges, including fixed-question formats, high costs, limited adaptability, and difficulties ensuring cross-cultural equivalence. While recent studies explore large language models (LLMs) to simulate survey responses, most are limited to structured questions, overlook the full survey process, and risk under-representing marginalized groups due to training data biases. We introduce AlignSurvey, the first benchmark that systematically replicates and evaluates the full social survey pipeline using LLMs. It defines four tasks aligned with key survey stages: social role modeling, semi-structured interview modeling, attitude stance modeling, and survey response modeling. It also provides task-specific evaluation metrics to assess alignment fidelity, consistency, and fairness at both individual and group levels, with a focus on demographic diversity. To support AlignSurvey, we construct a multi-tiered dataset architecture: (i) the Social Foundation Corpus, a cross-national resource with 44K+ interview dialogues and 400K+ structured survey records; and (ii) a suite of Entire-Pipeline Survey Datasets, including the expert-annotated AlignSurvey-Expert (ASE) and two nationally representative surveys for cross-cultural evaluation. We release the SurveyLM family, obtained through two-stage fine-tuning of open-source LLMs, and offer reference models for evaluating domain-specific alignment. All datasets, models, and tools are available on GitHub and Hugging Face to support transparent and socially responsible research.