🤖 AI Summary
This study challenges the presumed dual benefit of defacing (facial de-identification) as a privacy-preserving strategy for publicly shared head MRI data. The authors propose cascaded diffusion probabilistic models (DPMs) to reconstruct high-fidelity faces from defaced MRIs, first demonstrating feasibility and then systematically evaluating both the residual privacy risk and the scientific utility lost to defacing. Results show: (1) the DPMs effectively bypass defacing, reconstructing faces that closely resemble the originals and exposing a significant privacy vulnerability; (2) defacing severely impairs cross-modal prediction: original MRIs predict CT-derived skeletal muscle radiodensity (shin muscle: *p* < 0.05), whereas defaced counterparts fail to do so (*p* > 0.05); (3) DPM-reconstructed faces exhibit significantly smaller surface distances to the ground-truth faces than a population-average template (*p* < 0.05), and this performance generalizes well to a previously unseen dataset. Collectively, the findings indicate that defacing is neither sufficient for privacy protection nor benign for downstream analysis, undermining its methodological justification in neuroimaging research.
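The surface-distance comparison in point (3) can be illustrated with a small sketch. This is not the paper's implementation; it assumes faces are represented as 3-D point clouds and uses a mean symmetric nearest-neighbor distance as one plausible metric, with synthetic stand-in data:

```python
# Hedged sketch: scoring a reconstructed face against ground truth by
# mean symmetric surface distance, and comparing it to a population-average
# template. All data here are synthetic placeholders, not real face surfaces.
import numpy as np
from scipy.spatial import cKDTree

def mean_symmetric_surface_distance(a, b):
    """Average nearest-neighbor distance between point clouds, both directions."""
    d_ab, _ = cKDTree(b).query(a)  # each point in a -> nearest point in b
    d_ba, _ = cKDTree(a).query(b)  # each point in b -> nearest point in a
    return (d_ab.mean() + d_ba.mean()) / 2.0

rng = np.random.default_rng(1)
face = rng.uniform(size=(500, 3))                    # stand-in ground-truth surface
recon = face + rng.normal(0, 0.01, size=(500, 3))    # a close reconstruction
template = rng.uniform(size=(500, 3))                # stand-in "average face"

d_recon = mean_symmetric_surface_distance(recon, face)
d_template = mean_symmetric_surface_distance(template, face)
# A faithful reconstruction should sit much closer to ground truth
# than a generic population template does.
```

In the study, significance of the gap between the two distances is then assessed across subjects; here the metric itself is the only point being sketched.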
📝 Abstract
Defacing is often applied to head magnetic resonance image (MRI) datasets prior to public release to address privacy concerns. The alteration of facial and nearby voxels has provoked discussion about the true capability of these techniques to ensure privacy, as well as their impact on downstream tasks. With advancements in deep generative models, the extent to which defacing can protect privacy is uncertain. Additionally, while the altered voxels are known to contain valuable anatomical information, their potential to support research beyond the anatomical regions directly affected by defacing remains unclear. To evaluate both questions, we develop a refacing pipeline that recovers faces in defaced head MRIs using cascaded diffusion probabilistic models (DPMs). The DPMs are trained on images from 180 subjects and tested on images from 484 unseen subjects, 469 of whom are from a different dataset. To assess whether the voxels altered by defacing contain universally useful information, we also predict computed tomography (CT)-derived skeletal muscle radiodensity from facial voxels in both defaced and original MRIs. The results show that DPMs can generate high-fidelity faces that resemble the original faces from defaced images, with surface distances to the original faces significantly smaller than those of a population average face (p < 0.05). This performance also generalizes well to previously unseen datasets. For skeletal muscle radiodensity predictions, using defaced images results in significantly weaker Spearman's rank correlation coefficients compared to using original images (p < 10⁻⁴). For shin muscle, the correlation is statistically significant (p < 0.05) when using original images but not (p > 0.05) when any defacing method is applied, suggesting that defacing might not only fail to protect privacy but also eliminate valuable information.
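The utility comparison rests on Spearman's rank correlation between predicted and CT-derived radiodensity. A minimal sketch of that comparison, using `scipy.stats.spearmanr` on synthetic data (the variable names and the signal/noise levels are illustrative assumptions, not the paper's data):

```python
# Hedged sketch: comparing Spearman's rank correlations for predictions made
# from original vs. defaced images. Synthetic data stand in for the real
# CT-derived radiodensity values and model predictions.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 100
true_density = rng.normal(50.0, 10.0, n)                  # CT-derived target (synthetic)
pred_original = true_density + rng.normal(0.0, 5.0, n)    # predictions from original MRIs
pred_defaced = rng.normal(50.0, 10.0, n)                  # predictions from defaced MRIs
                                                          # (facial signal destroyed)

rho_orig, p_orig = spearmanr(true_density, pred_original)
rho_def, p_def = spearmanr(true_density, pred_defaced)
# With an informative input, rho_orig is strongly positive and p_orig is
# well below 0.05; with the signal removed, rho_def hovers near zero.
```

The paper's claim corresponds to the first case surviving the significance test and the second failing it; a formal comparison of the two coefficients would additionally use a test for the difference between correlations.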