🤖 AI Summary
This study addresses the structural misalignment between qualitative and quantitative data in mixed-methods research by leveraging large language models (LLMs) to generate psychometrically reliable synthetic survey responses from interview transcripts. Methodologically, we employed the Behavioral Regulation in Exercise Questionnaire (BREQ) as the measurement framework, integrating content from in-depth interviews with after-school program staff. Using structured prompt engineering and low-temperature sampling, we systematically evaluated how interview-derived contextual cues affect response quality across Claude and GPT models. Key contributions include: (1) empirical evidence that interview guidance improves both response diversity and fidelity to individual response patterns for several models; (2) evidence that prompt design and temperature settings exert stronger influence on psychometric alignment than demographic variables; and (3) confirmation that while LLMs reliably reproduce aggregate distributions, they underrepresent response variability, a limitation that interview-informed conditioning partially mitigates.
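The summary describes interview-conditioned prompting with low-temperature sampling. As a rough illustration of what such a pipeline might look like, the sketch below conditions a single BREQ item on an interview excerpt via the OpenAI Python SDK; the model name, prompt wording, excerpt, and survey item are illustrative stand-ins, not the authors' actual materials.

```python
# Illustrative sketch only: the prompt text, model name, interview excerpt,
# and BREQ item below are hypothetical stand-ins, not the study's materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

interview_excerpt = (
    "I mostly exercise because it clears my head after work; "
    "nobody is making me do it."
)
breq_item = "I exercise because other people say I should."

prompt = (
    "You are answering a survey as the person quoted below.\n"
    f'Interview excerpt: "{interview_excerpt}"\n\n'
    f"Survey item: {breq_item}\n"
    "Respond with a single integer on a 0-4 Likert scale "
    "(0 = not true for me, 4 = very true for me)."
)

response = client.chat.completions.create(
    model="gpt-4o",    # illustrative; the study compares GPT and Claude models
    temperature=0.2,   # low temperature, as described in the summary
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same structure ports to Anthropic's SDK for the Claude comparison; only the client and call signature change, while the interview-conditioned prompt stays constant across models.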
📝 Abstract
Mixed methods research integrates quantitative and qualitative data but faces challenges in aligning their distinct structures, particularly when examining measurement characteristics and individual response patterns. Advances in large language models (LLMs) offer promising solutions by generating synthetic survey responses informed by qualitative data. This study investigates whether LLMs, guided by personal interviews, can reliably predict human survey responses, using the Behavioral Regulation in Exercise Questionnaire (BREQ) and interviews with after-school program staff as a case study. Results indicate that LLMs capture overall response patterns but exhibit lower variability than humans. Incorporating interview data improves response diversity for some models (e.g., Claude, GPT), while well-crafted prompts and low-temperature settings enhance alignment between LLM and human responses. Demographic information affected alignment accuracy less than interview content did. These findings underscore the potential of interview-informed LLMs to bridge qualitative and quantitative methodologies while revealing limitations in response variability, emotional interpretation, and psychometric fidelity. Future research should refine prompt design, explore bias mitigation, and optimize model settings to enhance the validity of LLM-generated survey data in social science research.
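The abstract's central finding, that LLMs match aggregate response patterns while compressing variability, corresponds to a simple check one could run on paired response sets. The sketch below, with fabricated arrays standing in for real human and synthetic Likert data, compares means and standard deviations; the variable names and values are illustrative, not study data.

```python
# Illustrative sketch: arrays are fabricated placeholders, not study data.
import numpy as np

human = np.array([0, 1, 2, 2, 3, 4, 1, 3, 2, 0])      # human Likert responses (0-4)
synthetic = np.array([2, 2, 2, 3, 2, 2, 3, 2, 2, 2])  # LLM-generated responses

# Similar means can mask compressed variability: this is the pattern the
# abstract reports for LLM responses before interview-informed conditioning.
print(f"mean  human={human.mean():.2f}  synthetic={synthetic.mean():.2f}")
print(f"sd    human={human.std(ddof=1):.2f}  synthetic={synthetic.std(ddof=1):.2f}")
```

In this toy example the two means nearly coincide while the synthetic standard deviation is far smaller, which is the variability gap that interview conditioning is reported to narrow for some models.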