AI Summary
This study addresses the challenge of balancing privacy compliance with preservation of research-critical metadata in medical image de-identification. We introduce the first large-scale DICOM de-identification benchmark integrating real clinical images with synthetically generated sensitive information. Participating teams combined rule-based engines, optical character recognition (OCR), large language models, and customized open-source and proprietary tools to automate the removal of protected health information (PHI) and personally identifiable information (PII) from multi-center, multi-modality radiological imaging data while adhering to HIPAA Safe Harbor, the DICOM Attribute Confidentiality Profiles, and TCIA standards for retention of research metadata. The key contribution is a unified evaluation framework built on clinically realistic images with controllable synthesis of sensitive attributes. Across ten participating teams, de-identification accuracy ranged from 97.91% to 99.93%, demonstrating the platform's effectiveness in preserving the utility of AI training data without compromising regulatory compliance.
Abstract
The de-identification (deID) of protected health information (PHI) and personally identifiable information (PII) is a fundamental requirement for sharing medical images, particularly through public repositories, to ensure compliance with patient privacy laws. In addition, preservation of the non-PHI metadata that informs and enables downstream development of imaging artificial intelligence (AI) is an important consideration in biomedical research. The goal of the Medical Image De-Identification Benchmark (MIDI-B) challenge was to provide a standardized platform for benchmarking DICOM image deID tools against a set of rules conformant to the HIPAA Safe Harbor regulation, the DICOM Attribute Confidentiality Profiles, and best practices in preservation of research-critical metadata as defined by The Cancer Imaging Archive (TCIA). The challenge employed a large, diverse, multi-center, multi-modality set of real, de-identified radiology images into which synthetic PHI/PII had been inserted.
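To make the rule-based framing concrete, the sketch below shows what a single deID pass over DICOM metadata might look like. It assumes the pydicom library; the attribute keywords and action table are our illustrative choices, not the actual MIDI-B rule set.

```python
# Minimal sketch of one rule-based deID pass, assuming the pydicom
# library. The attributes and actions below are illustrative only;
# they are NOT the MIDI-B rule set, which is derived from HIPAA Safe
# Harbor, the DICOM Attribute Confidentiality Profiles, and TCIA
# metadata-retention practices.
import pydicom

# One action per DICOM attribute keyword: remove the element, blank
# its value, or keep it for downstream research use.
RULES = {
    "PatientName": "blank",        # direct identifier
    "PatientID": "blank",          # direct identifier
    "PatientBirthDate": "remove",  # direct identifier
    "Modality": "keep",            # research-critical metadata
    "BodyPartExamined": "keep",    # research-critical metadata
}

def apply_rules(ds: pydicom.Dataset) -> pydicom.Dataset:
    for keyword, action in RULES.items():
        if not hasattr(ds, keyword):
            continue
        if action == "remove":
            delattr(ds, keyword)
        elif action == "blank":
            setattr(ds, keyword, "")
        # "keep": leave the element untouched
    ds.remove_private_tags()  # private tags frequently carry PHI
    return ds

apply_rules(pydicom.dcmread("input.dcm")).save_as("deid.dcm")
```

Real deID pipelines, including those used by MIDI-B participants, extend this pattern with date shifting, UID remapping, and OCR-based removal of text burned into pixel data.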
The MIDI-B Challenge consisted of three phases: training, validation, and test. Eighty individuals registered for the challenge. In the training phase, participants were encouraged to tune their algorithms on in-house or public data. The validation and test phases used DICOM images containing synthetic identifiers from 216 and 322 subjects, respectively. Ten teams successfully completed the test phase. To measure the success of a rule-based approach to image deID, scores were computed as the percentage of correct actions out of the total number of required actions; scores ranged from 97.91% to 99.93%. Participants employed a variety of open-source and proprietary tools with customized configurations, large language models, and optical character recognition (OCR). In this paper, we provide a comprehensive report on the MIDI-B Challenge's design, implementation, results, and lessons learned.
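Read as a formula, the scoring rule is a simple ratio. The sketch below restates it in Python; the function name and variables are ours, not the challenge platform's.

```python
def midi_b_score(correct_actions: int, required_actions: int) -> float:
    """Percentage of required deID actions performed correctly."""
    return 100.0 * correct_actions / required_actions

# e.g., 9,990 correct out of 10,000 required actions -> 99.90%
print(f"{midi_b_score(9_990, 10_000):.2f}%")
```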