Enhancing Clinical Models with Pseudo Data for De-identification

πŸ“… 2025-06-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Clinical foundation models suffer from representation degradation and suboptimal downstream de-identification performance due to coarse-grained β€œredaction” of protected health information (PHI) in training data. Method: We propose a semantic-consistent pseudo-PHI generation approach that replaces redacted spans with realistic, contextually appropriate synthetic PHI to construct privacy-preserving pretraining corpora. We conduct privacy-safe pretraining using an encoder-only architecture, followed by supervised fine-tuning and rigorous clinical text evaluation. Contribution/Results: We are the first to systematically characterize the detrimental impact of redaction on linguistic representations; introduce a controllable, generalizable pseudo-PHI generation strategy; and publicly release our generation toolkit, pre-trained and fine-tuned models, and a high-quality pseudo-PHI dataset. Experiments demonstrate significant improvements over state-of-the-art baselines on PHI de-identification tasks, delivering a fully reproducible, end-to-end solution.
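The summary above describes replacing coarse redaction markers with realistic, contextually appropriate synthetic PHI. The following is a minimal sketch of that idea, not the authors' actual toolkit: the placeholder syntax (`[**NAME**]`-style markers) and the tiny hand-made surrogate pools are assumptions for illustration, whereas a real generator would sample from large, distribution-matched name, date, and location sources.

```python
import random
import re

# Hypothetical surrogate pools; a real system would draw from much larger,
# distribution-matched generators for each PHI category.
SURROGATES = {
    "NAME": ["John Carter", "Maria Lopez", "Wei Zhang"],
    "DATE": ["2019-03-14", "07/22/2020", "January 5, 2018"],
    "HOSPITAL": ["Lakeside Medical Center", "Northview General Hospital"],
}

# Matches redaction placeholders of the (assumed) form [**NAME**], [**DATE**], etc.
PLACEHOLDER = re.compile(r"\[\*\*([A-Z]+)\*\*\]")

def pseudonymize(text: str, rng: random.Random) -> str:
    """Replace each redaction placeholder with a sampled surrogate value,
    leaving unknown categories untouched."""
    def _sub(match: re.Match) -> str:
        pool = SURROGATES.get(match.group(1))
        return rng.choice(pool) if pool else match.group(0)
    return PLACEHOLDER.sub(_sub, text)

note = "Patient [**NAME**] was admitted to [**HOSPITAL**] on [**DATE**]."
print(pseudonymize(note, random.Random(0)))
```

The resulting pseudo text reads like a natural clinical note, so a model pretrained on it never sees the artificial masking syntax that degrades representations.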

πŸ“ Abstract
Many models are pretrained on redacted text for privacy reasons. Clinical foundation models, in particular, are often trained on de-identified text in which protected health information is replaced with special masking syntax. Although these models have grown in popularity, there has been little effort to understand the effects of training them on redacted text. In this work, we pretrain several encoder-only models on a dataset containing redacted text and on a version in which the redactions are replaced with realistic pseudo text. We then fine-tune the models for the protected health information de-identification task and show that our methods significantly outperform previous baselines. The contributions of this work include: a) our novel and surprising findings, with training recommendations; b) the redacted-text replacements used to produce the pseudo dataset; c) pretrained embeddings and fine-tuned task-specific models; and d) freely available source code for pseudo training dataset generation and the models used in our experiments.
Problem

Research questions and friction points this paper is trying to address.

Effects of training models on redacted clinical text
Improving de-identification with realistic pseudo data
Performance comparison of models on PHI de-identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pretrain models with redacted and pseudo text
Fine-tune models for health information de-identification
Generate and share pseudo dataset and source code
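The fine-tuned models are compared on the PHI de-identification task. One standard way to score such systems, assumed here for illustration rather than taken from the paper, is exact-match span-level precision, recall, and F1 over predicted PHI spans:

```python
def span_f1(gold: set, pred: set) -> tuple:
    """Exact-match span-level precision/recall/F1.

    Each element is a (start, end, phi_type) tuple; a prediction counts as a
    true positive only if both boundaries and the PHI type match exactly.
    """
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: one span matched exactly, one missed, one spurious.
gold = {(0, 11, "NAME"), (30, 40, "DATE")}
pred = {(0, 11, "NAME"), (50, 55, "DATE")}
print(span_f1(gold, pred))  # (0.5, 0.5, 0.5)
```

Exact-match scoring is strict; relaxed variants that credit partial overlaps are also common in de-identification evaluations.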
πŸ”Ž Similar Papers
No similar papers found.
Paul Landes
Department of Computer Science, University of Illinois Chicago
Aaron J Chaise
Department of Emergency Medicine, University of Illinois Chicago
Tarak Nath Nandi
Assistant Computational Scientist, Argonne National Laboratory
Genomics, Cancer Biology, Artificial Intelligence, CFD/Turbulence, Materials Science
Ravi K Madduri
Advanced Privacy Preserving Federated Learning, Argonne National Laboratory