🤖 AI Summary
This work addresses the challenge of reliably simulating authentic human behavior with large language model (LLM)-driven digital twins, which often exhibit systematic biases and poor calibration. The authors propose a lightweight, model-agnostic post-processing calibration framework that brings synthetic control methods, originally developed in causal inference, into the digital twin domain for the first time. Grounded in a latent factor model, the framework establishes a theoretical condition for aligning latent spaces, enabling accurate individual- and distribution-level simulation of unseen queries and unobserved populations. Compatible with any LLM and supporting end-to-end calibration, the method achieves up to a 50% relative improvement in individual-level behavioral correlation and a 50%–90% relative reduction in distributional discrepancy.
📝 Abstract
AI-based persona simulation -- often referred to as digital twin simulation -- is increasingly used for market research, recommender systems, and social sciences. Despite their flexibility, large language models (LLMs) often exhibit systematic bias and miscalibration relative to real human behavior, limiting their reliability. Inspired by synthetic control methods from causal inference, we propose SYN-DIGITS (SYNthetic Control Framework for Calibrated DIGItal Twin Simulation), a principled and lightweight calibration framework that learns latent structure from digital-twin responses and transfers it to align predictions with human ground truth. SYN-DIGITS operates as a post-processing layer on top of any LLM-based simulator and thus is model-agnostic. We develop a latent factor model that formalizes when and why calibration succeeds through latent space alignment conditions, and we systematically evaluate ten calibration methods across thirteen persona constructions, three LLMs, and two datasets. SYN-DIGITS supports both individual-level and distributional simulation for previously unseen questions and unobserved populations, with provable error guarantees. Experiments show that SYN-DIGITS achieves up to 50% relative improvements in individual-level correlation and 50--90% relative reductions in distributional discrepancy compared to uncalibrated baselines.
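The core synthetic-control idea behind this kind of calibration can be illustrated with a toy sketch: learn weights over digital-twin "donor" responses that reproduce human ground truth on a small set of calibration questions, then transfer those fixed weights to unseen questions. Everything below (variable names, shapes, the noiseless rank-one data) is a hypothetical illustration under an idealized latent factor assumption, not the paper's actual SYN-DIGITS implementation.

```python
import numpy as np

# Toy latent factor world: each twin persona i answers question j as
# latent[i] * questions[j]. Humans share the same latent structure
# (a fixed mixture of the donors) -- the alignment condition under
# which this style of calibration can succeed.
latent = np.array([1.0, 2.0, -1.0])                     # donor loadings
questions = np.array([0.5, -1.0, 2.0, 1.5, -0.5, 1.0])  # 4 calibration + 2 unseen
twin = np.outer(latent, questions)                      # twin responses (donors x questions)

true_w = np.array([0.5, 0.3, 0.2])                      # unknown human mixture
human = true_w @ twin                                   # human ground truth

n_cal = 4
# Synthetic-control-style fit: choose weights over twin responses that
# reproduce the human answers on the calibration questions only.
w, *_ = np.linalg.lstsq(twin[:, :n_cal].T, human[:n_cal], rcond=None)

# Transfer the learned weights to the two held-out questions.
pred = w @ twin[:, n_cal:]
err_calibrated = np.mean((pred - human[n_cal:]) ** 2)

# Uncalibrated baseline: naive average over twin personas.
err_raw = np.mean((twin[:, n_cal:].mean(axis=0) - human[n_cal:]) ** 2)
```

In this noiseless toy the learned weights match the human mixture exactly on the shared latent direction, so the transferred predictions on unseen questions are essentially exact, while the naive average of twin responses is biased; real data would add noise and require the paper's error guarantees rather than an exact fit.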