A Calibrated Memorization Index (MI) for Detecting Training Data Leakage in Generative MRI Models

📅 2026-02-13
📈 Citations: 0
Influential: 0

📝 Abstract
Generative image models are known to reproduce images from their training data in their outputs, which raises privacy concerns when they are used for medical image generation. We propose a calibrated per-sample metric for detecting memorization and duplication of training data. Our metric extracts image features with an MRI foundation model, aggregates multi-layer whitened nearest-neighbor similarities, and maps them to a bounded *Overfit/Novelty Index* (ONI) and *Memorization Index* (MI). Across three MRI datasets with controlled duplication percentages and typical image augmentations, the metric robustly detects duplication and provides more consistent values across datasets. At the sample level, it achieves near-perfect detection of duplicates.
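The pipeline described in the abstract (per-layer feature whitening, nearest-neighbor distance to the training set, aggregation across layers, and mapping to a bounded score) can be sketched as below. This is a minimal illustration, not the paper's implementation: the feature extractor, whitening variant, aggregation rule, and the distance-to-score mapping are all assumptions, and `memorization_index` is a hypothetical name.

```python
import numpy as np

def whiten(feats, eps=1e-6):
    # ZCA-style whitening of training features (assumed variant):
    # zero-mean, then decorrelate via the eigendecomposition of the covariance.
    mu = feats.mean(axis=0, keepdims=True)
    x = feats - mu
    cov = x.T @ x / max(len(x) - 1, 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return x @ W, mu, W

def memorization_index(gen_feats_per_layer, train_feats_per_layer):
    """Bounded per-sample score in (0, 1]: higher = closer to some training sample.

    Each list entry holds (n_samples, dim) features from one layer of a
    (hypothetical) MRI foundation-model feature extractor.
    """
    layer_dists = []
    for g, t in zip(gen_feats_per_layer, train_feats_per_layer):
        tw, mu, W = whiten(t)
        gw = (g - mu) @ W  # project generated features into the whitened space
        # Nearest-neighbor distance to the training set, per generated sample.
        d = np.linalg.norm(gw[:, None, :] - tw[None, :, :], axis=-1).min(axis=1)
        layer_dists.append(d)
    d_mean = np.mean(layer_dists, axis=0)  # aggregate across layers
    return 1.0 / (1.0 + d_mean)  # map distance to a bounded index (assumed mapping)
```

An exact duplicate of a training sample has whitened nearest-neighbor distance zero and scores 1.0, while a novel sample scores closer to 0, which is the qualitative behavior a per-sample duplication detector needs.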
Problem

Research questions and friction points this paper is trying to address.

training data leakage
memorization
generative MRI models
privacy
data duplication
Innovation

Methods, ideas, or system contributions that make the work stand out.

Memorization Index
Training Data Leakage
Generative MRI Models
Nearest-Neighbor Similarity
Calibrated Metric