Leveraging Interview-Informed LLMs to Model Survey Responses: Comparative Insights from AI-Generated and Human Data

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the structural misalignment between qualitative and quantitative data in mixed-methods research by leveraging large language models (LLMs) to generate psychometrically reliable synthetic survey responses from interview transcripts. Methodologically, we employed the Behavioral Regulation in Exercise Questionnaire (BREQ) as the measurement framework, integrating content from in-depth interviews with after-school program staff. Using structured prompt engineering and low-temperature sampling, we systematically evaluated—across Claude and GPT models—how interview-derived contextual cues enhance response quality. Key contributions include: (1) the first empirical demonstration that interview guidance significantly improves both response diversity and fidelity to individual response patterns; (2) evidence that prompt design and temperature parameters exert stronger influence on psychometric alignment than demographic variables; and (3) confirmation that while LLMs reliably reproduce aggregate distributions, they initially underrepresent response variability—a limitation effectively mitigated by interview-informed conditioning.

📝 Abstract
Mixed methods research integrates quantitative and qualitative data but faces challenges in aligning their distinct structures, particularly in examining measurement characteristics and individual response patterns. Advances in large language models (LLMs) offer promising solutions by generating synthetic survey responses informed by qualitative data. This study investigates whether LLMs, guided by personal interviews, can reliably predict human survey responses, using the Behavioral Regulation in Exercise Questionnaire (BREQ) and interviews from after-school program staff as a case study. Results indicate that LLMs capture overall response patterns but exhibit lower variability than humans. Incorporating interview data improves response diversity for some models (e.g., Claude, GPT), while well-crafted prompts and low-temperature settings enhance alignment between LLM and human responses. Demographic information had less impact than interview content on alignment accuracy. These findings underscore the potential of interview-informed LLMs to bridge qualitative and quantitative methodologies while revealing limitations in response variability, emotional interpretation, and psychometric fidelity. Future research should refine prompt design, explore bias mitigation, and optimize model settings to enhance the validity of LLM-generated survey data in social science research.
Problem

Research questions and friction points this paper is trying to address.

Aligning qualitative and quantitative data structures in mixed methods research
Predicting human survey responses using interview-informed LLMs
Improving LLM-generated survey data validity and response diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate synthetic survey responses
Interview data enhances response diversity
Low-temperature settings improve response alignment
Jihong Zhang
Department of Counseling, Leadership, and Research Methods, University of Arkansas
Xinya Liang
Department of Counseling, Leadership, and Research Methods, University of Arkansas
Anqi Deng
Department of Health, Human Performance, & Recreation, University of Arkansas
Nicole Bonge
Department of Counseling, Leadership, and Research Methods, University of Arkansas
Lin Tan
Mary J. Elmore New Frontiers Professor, Computer Science, Purdue University
Ling Zhang
Alibaba DAMO Academy USA
Nicole Zarrett
Department of Psychology, University of South Carolina