🤖 AI Summary
Existing synthetic data generation methods for epidemiological studies suffer from limitations in data fidelity, computational efficiency, and practical usability. Method: We propose an adversarial random forest (ARF)-based framework for tabular synthetic data generation, integrating dimensionality reduction, pre-derived variable construction, and multi-cohort harmonization to substantially reduce computational overhead and improve deployability. Contribution/Results: Evaluated across six real-world epidemiological studies, the synthetic data achieve high statistical fidelity—matching original data in descriptive statistics and inferential analyses (e.g., effect estimates, confidence intervals, significance testing) with mean absolute error <5%. Robust performance persists even under small-sample conditions. This work is the first to directly apply ARF to epidemiological data synthesis and to conduct end-to-end evaluation of statistical utility, establishing a new paradigm for privacy-preserving, reproducible epidemiological research.
📝 Abstract
Generative artificial intelligence for synthetic data generation holds substantial potential to address practical challenges in epidemiology. However, many current methods suffer from limited quality, high computational demands, and complexity for non-experts. Furthermore, common evaluation strategies for synthetic data often fail to directly reflect statistical utility. Against this background, a critical underexplored question is whether synthetic data can reliably reproduce key findings from epidemiological research. We propose the use of adversarial random forests (ARF) as an efficient and convenient method for synthesizing tabular epidemiological data. To evaluate its performance, we replicated statistical analyses from six epidemiological publications and compared original with synthetic results. These publications cover blood pressure, anthropometry, myocardial infarction, accelerometry, loneliness, and diabetes, based on data from the German National Cohort (NAKO Gesundheitsstudie), the Bremen STEMI Registry U45 Study, and the Guelph Family Health Study. Additionally, we assessed the impact of dimensionality and variable complexity on synthesis quality by limiting datasets to variables relevant for individual analyses, including necessary derivations. Across all replicated original studies, results from multiple synthetic data replications consistently aligned with original findings. Even for datasets with relatively low sample size-to-dimensionality ratios, the replication outcomes closely matched the original results across various descriptive and inferential analyses. Reducing dimensionality and pre-deriving variables further enhanced both quality and stability of the results.