🤖 AI Summary
To address the scarcity of labeled ground-truth data for entity resolution in privacy-sensitive domains (e.g., healthcare), this paper proposes a blind annotation protocol based on homomorphic encryption — to the authors' knowledge, the first privacy-preserving approach to ground-truth generation for entity resolution. The protocol enables multiple domain oracles to collaboratively label record pairs without ever accessing each other's plaintext data. To hide the cryptographic machinery from annotators, the authors also design a lightweight, easy-to-use domain-specific language (DSL). The privacy guarantee is rigorously proved, and experiments with an annotation simulator indicate the protocol is practical: the generated labels achieve an average f-measure above 90% against the real ground truths.
📝 Abstract
The entity resolution problem requires finding pairs of records across datasets that belong to different owners but refer to the same real-world entity. Training and evaluating solutions to this problem, whether rule-based or machine-learning-based, requires a ground-truth dataset of matched entity pairs or clusters. Producing such a dataset, however, relies on humans acting as domain oracles who review the plaintext of every candidate record pair from the different parties, which inevitably infringes on the owners' data privacy, especially in sensitive settings such as medical records. To the best of our knowledge, there is no prior work on privacy-preserving ground-truth dataset generation, in particular for entity resolution. We propose a novel blind annotation protocol based on homomorphic encryption that allows domain oracles to collaboratively label ground truths without sharing plaintext data with other parties. In addition, we design an easy-to-use domain-specific language that hides the sophisticated underlying homomorphic encryption layer. We provide a rigorous proof of the privacy guarantee, and our empirical experiments with an annotation simulator demonstrate the feasibility of the protocol: the f-measure averages above 90% against the real ground truths.
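The abstract does not specify which homomorphic scheme the protocol uses, but the key property it relies on — computing on data that the computing party cannot read — can be illustrated with a toy additively homomorphic cryptosystem. The sketch below is a textbook Paillier construction with deliberately tiny (insecure) parameters, chosen only to show how encrypted values from different owners could be combined (e.g., aggregating similarity scores for a candidate record pair) without decryption; it is not the paper's actual protocol.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic). Illustrative only:
# the primes are far too small for real use, and a deployment would rely
# on a vetted HE library with proper key sizes.
p, q = 1000003, 1000033          # tiny demo primes (insecure)
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)     # Carmichael's lambda(n)
mu = pow(lam, -1, n)             # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Enc(m) = (n+1)^m * r^n mod n^2, with r random and coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x-1)//n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# so a party can aggregate encrypted scores it cannot read.
a, b = encrypt(12), encrypt(30)
print(decrypt((a * b) % n2))     # -> 42
```

In a blind-annotation setting, this property would let an oracle's labeling rules be evaluated over encrypted field comparisons, with only the final verdict ever decrypted by the data owner.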