🤖 AI Summary
This work systematically uncovers an inherent privacy vulnerability in data curation pipelines: even when the final model is trained exclusively on publicly available data, using private data to guide dataset selection can inadvertently leak membership information about that private data. To demonstrate this risk, we propose novel membership inference attacks targeting three distinct stages of the curation pipeline (score computation, subset selection, and final model training) and validate their effectiveness across mainstream data selection algorithms. Furthermore, we design a tailored defense mechanism that integrates differential privacy into the curation process. Empirical evaluations show that our approach significantly mitigates information leakage at each stage while preserving utility.
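To make the three attack surfaces concrete, the pipeline can be sketched as follows. This is a minimal illustration, not the paper's actual algorithms: the similarity-based scoring rule, the toy data, and all function names are assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: "private" points guide selection among "public" points.
# (Illustrative only; the paper evaluates real data selection algorithms.)
private = rng.normal(loc=1.0, size=(50, 8))
public = rng.normal(loc=0.0, size=(500, 8))

def curation_scores(public, private):
    """Stage 1 (score computation): score each public point by its
    mean similarity (negative distance) to the private dataset."""
    d = np.linalg.norm(public[:, None, :] - private[None, :, :], axis=-1)
    return -d.mean(axis=1)

def select_subset(scores, k):
    """Stage 2 (subset selection): keep the k highest-scoring points."""
    return np.argsort(scores)[-k:]

scores = curation_scores(public, private)
subset = public[select_subset(scores, k=100)]
# Stage 3 (final model training) would now fit a model on `subset` alone.
# Note that all three stages are functions of the private data, so each
# one is a potential leakage channel for membership information.
```

Even though the trained model only ever sees `subset`, which points made it into `subset` depends on the private data, which is exactly the dependence the attacks exploit.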
📝 Abstract
In machine learning, curation is used to select the most valuable data for improving both model accuracy and computational efficiency. Recently, curation has also been explored as a solution for private machine learning: rather than training directly on sensitive data, which is known to leak information through model predictions, the private data is used only to guide the selection of useful public data. The resulting model is then trained solely on curated public data. It is tempting to assume that such a model is privacy-preserving because it has never seen the private data. Yet, we show that without further protection, curation pipelines can still leak private information. Specifically, we introduce novel attacks against popular curation methods, targeting every major step: the computation of curation scores, the selection of the curated subset, and the final trained model. We demonstrate that each stage reveals information about the private dataset and that even models trained exclusively on curated public data leak membership information about the private data that guided curation. These findings highlight the previously overlooked inherent privacy risks of data curation and show that privacy assessment must extend beyond the training procedure to include the data selection process. Our differentially private adaptations of curation methods effectively mitigate leakage, indicating that formal privacy guarantees for curation are a promising direction.
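The kind of differentially private adaptation the abstract alludes to can be sketched with a standard Gaussian-mechanism recipe applied to the scoring stage: clip each private point's contribution to a public point's score, average, and add calibrated noise. This is a hedged illustration under assumed parameters, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_curation_scores(public, private, clip=1.0, sigma=0.5):
    """Illustrative DP scoring stage: each private point's similarity
    contribution is clipped to [-clip, 0], so replacing one private
    record changes the mean score by at most clip / n, and Gaussian
    noise scaled to that sensitivity masks the change."""
    d = np.linalg.norm(public[:, None, :] - private[None, :, :], axis=-1)
    sims = np.clip(-d, -clip, 0.0)        # bound each record's influence
    n = private.shape[0]
    mean_sims = sims.mean(axis=1)         # per-record sensitivity <= clip / n
    noise = rng.normal(0.0, sigma * clip / n, size=mean_sims.shape)
    return mean_sims + noise

public = rng.normal(size=(200, 4))
private = rng.normal(size=(40, 4))
noisy = dp_curation_scores(public, private)
# Downstream subset selection on `noisy` (e.g. top-k) then inherits the
# privacy guarantee of the noisy scores via post-processing.
```

Selecting the top-k public points from the noisy scores, rather than the exact ones, is what trades a small amount of curation quality for a formal bound on how much any single private record can shift the curated subset.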