Medical Image De-Identification Benchmark Challenge

πŸ“… 2025-07-31
πŸ€– AI Summary
This study addresses the challenge of balancing privacy compliance with preservation of research metadata in medical image de-identification. It introduces the first large-scale DICOM de-identification benchmark integrating real clinical images with synthetically generated sensitive information. Participating teams combined rule-based engines, OCR, large language models, and customized open-source and proprietary tools to automate the removal of protected health information (PHI) and personally identifiable information (PII) across multi-center, multi-modal radiological imaging data, while adhering to HIPAA Safe Harbor, the DICOM Attribute Confidentiality Profiles, and TCIA research metadata retention standards. A key contribution is a unified evaluation framework that pairs clinical realism with controllable synthesis of sensitive attributes. Across the ten participating teams, de-identification accuracy ranged from 97.91% to 99.93%, demonstrating the platform's effectiveness in enhancing AI training data utility without compromising regulatory compliance.

πŸ“ Abstract
The de-identification (deID) of protected health information (PHI) and personally identifiable information (PII) is a fundamental requirement for sharing medical images, particularly through public repositories, to ensure compliance with patient privacy laws. In addition, preservation of non-PHI metadata to inform and enable downstream development of imaging artificial intelligence (AI) is an important consideration in biomedical research. The goal of the MIDI-B (Medical Image De-Identification Benchmark) Challenge was to provide a standardized platform for benchmarking DICOM image deID tools based on a set of rules conformant to the HIPAA Safe Harbor regulation, the DICOM Attribute Confidentiality Profiles, and best practices in preservation of research-critical metadata, as defined by The Cancer Imaging Archive (TCIA). The challenge employed a large, diverse, multi-center, and multi-modality set of real de-identified radiology images with synthetic PHI/PII inserted. The MIDI-B Challenge consisted of three phases: training, validation, and test. Eighty individuals registered for the challenge. In the training phase, we encouraged participants to tune their algorithms using their in-house or public data. The validation and test phases utilized the DICOM images containing synthetic identifiers (of 216 and 322 subjects, respectively). Ten teams successfully completed the test phase of the challenge. To measure success of a rule-based approach to image deID, scores were computed as the percentage of correct actions out of the total number of required actions. The scores ranged from 97.91% to 99.93%. Participants employed a variety of open-source and proprietary tools with customized configurations, large language models, and optical character recognition (OCR). In this paper we provide a comprehensive report on the MIDI-B Challenge's design, implementation, results, and lessons learned.
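The scoring rule described in the abstract (percentage of correct actions out of the total required actions) can be sketched as a simple ratio. This is a minimal illustration; the function name and arguments are hypothetical, not the challenge's actual evaluation code.

```python
def deid_score(correct_actions: int, required_actions: int) -> float:
    """Hypothetical helper: score as the percentage of correct
    de-identification actions out of all required actions."""
    if required_actions <= 0:
        raise ValueError("required_actions must be positive")
    return 100.0 * correct_actions / required_actions

# e.g. 9,991 correct actions out of 10,000 required ~= 99.91%
print(round(deid_score(9991, 10000), 2))
```

Under this metric, the reported range of 97.91% to 99.93% means even the lowest-scoring team performed roughly 98 of every 100 required actions correctly.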
Problem

Research questions and friction points this paper is trying to address.

Standardize benchmarking of DICOM image de-identification tools
Ensure compliance with HIPAA and preserve research-critical metadata
Evaluate rule-based deID accuracy using synthetic PHI/PII data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized DICOM deID benchmarking platform
Synthetic PHI/PII in real radiology images
Combined OCR and large language models
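The rule-based approach listed above maps each metadata attribute to an action. A minimal sketch, loosely following the action codes used by the DICOM Basic Confidentiality Profile (X = remove, Z = replace with an empty/dummy value, K = keep); the tag set, rule table, and function are illustrative assumptions, not the benchmark's actual rule set.

```python
# Illustrative rule table: DICOM attribute name -> action code.
# X = remove, Z = replace with empty/dummy value, K = keep.
RULES = {
    "PatientName":      "Z",
    "PatientID":        "Z",
    "PatientBirthDate": "X",
    "InstitutionName":  "X",
    "Modality":         "K",  # research-critical metadata is preserved
    "StudyDate":        "K",  # (TCIA practice may shift dates instead)
}

def apply_rules(dataset: dict) -> dict:
    """Apply the rule table to a flat dict of attribute -> value."""
    out = {}
    for tag, value in dataset.items():
        action = RULES.get(tag, "X")  # unknown tags removed by default
        if action == "K":
            out[tag] = value
        elif action == "Z":
            out[tag] = ""             # empty/dummy replacement
        # "X": drop the attribute entirely
    return out

ds = {"PatientName": "DOE^JANE", "PatientID": "12345",
      "PatientBirthDate": "19700101", "Modality": "CT"}
print(apply_rules(ds))  # {'PatientName': '', 'PatientID': '', 'Modality': 'CT'}
```

Real tools operate on DICOM datasets rather than plain dicts and must also handle burned-in pixel text, which is where the OCR and LLM components mentioned above come in.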
Authors

Linmin Pei
Frederick National Laboratory for Cancer Research, Frederick, MD 21702, USA
Granger Sutton
National Cancer Institute, National Institutes of Health (NIH), Bethesda, MD 20892, USA
Michael Rutherford
University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
Ulrike Wagner
Frederick National Laboratory for Cancer Research, Frederick, MD 21702, USA
Tracy Nolan
University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
Kirk Smith
University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
Phillip Farmer
University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
Peter Gu
Essential Software Inc., 9711 Washingtonian Blvd, Suite 550, Gaithersburg, MD 20878, USA
Ambar Rana
Essential Software Inc., 9711 Washingtonian Blvd, Suite 550, Gaithersburg, MD 20878, USA
Kailing Chen
Essential Software Inc., 9711 Washingtonian Blvd, Suite 550, Gaithersburg, MD 20878, USA
Thomas Ferleman
Essential Software Inc., 9711 Washingtonian Blvd, Suite 550, Gaithersburg, MD 20878, USA
Brian Park
Essential Software Inc., 9711 Washingtonian Blvd, Suite 550, Gaithersburg, MD 20878, USA
Ye Wu
Essential Software Inc., 9711 Washingtonian Blvd, Suite 550, Gaithersburg, MD 20878, USA
Jordan Kojouharov
Amazon Web Services (AWS)
Gargi Singh
Amazon Web Services (AWS)
Jon Lemon
Amazon Web Services (AWS)
Tyler Willis
Amazon Web Services (AWS)
Milos Vukadinovic
University of California, Los Angeles, CA, USA; Cedars-Sinai Medical Center, Los Angeles, CA, USA
Grant Duffy
Cedars-Sinai Medical Center, Los Angeles, CA, USA
Bryan He
Stanford University
David Ouyang
Cardiology, Kaiser Permanente
Marco Pereanez
Biomedical Engineering & Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
Daniel Samber
Biomedical Engineering & Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
Derek A. Smith
Biomedical Engineering & Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
Christopher Cannistraci
Biomedical Engineering & Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA