Towards Privacy-Preserving Machine Translation at the Inference Stage: A New Task and Benchmark

📅 2026-03-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the privacy risks in online machine translation services, where named entities in users' sensitive input texts can be inadvertently exposed during inference, a scenario lacking a formal task definition, evaluation benchmarks, and mitigation methods in prior work. To bridge this gap, we formally introduce the Privacy-Preserving Machine Translation (PPMT) task, specifically targeting named entity protection at inference time. We construct three dedicated test sets, propose tailored evaluation metrics, and develop a baseline approach based on named entity recognition and replacement that effectively masks sensitive information while preserving translation quality. This work establishes a comprehensive framework for PPMT, fills a critical void in the field, and provides foundational resources for future research.

๐Ÿ“ Abstract
Current online translation services require sending user text to cloud servers, posing a risk of privacy leakage when the text contains sensitive information. This risk hinders the application of online translation services in privacy-sensitive scenarios. One way to mitigate this risk is to introduce privacy protection mechanisms targeting the inference stage of translation models. However, compared to subfields of NLP such as text classification and summarization, the machine translation research community has explored privacy protection during the inference stage only to a limited extent. There is no clearly defined privacy protection task for the inference stage, no dedicated evaluation datasets or metrics, and no reference benchmark methods. The absence of these elements has seriously constrained researchers' in-depth exploration of this direction. To bridge this gap, this paper proposes a novel "Privacy-Preserving Machine Translation" (PPMT) task, aiming to protect the private information in text during the model inference stage. For this task, we constructed three benchmark test datasets, designed corresponding evaluation metrics, and proposed a series of benchmark methods as a starting point. The definition of privacy is complex and diverse; considering that named entities often contain a large amount of personal privacy and commercial secrets, we focus our research on protecting only the privacy of named entities in the text. We expect this work to provide a new perspective and a solid foundation for the privacy protection problem in machine translation.
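The baseline described in the summary detects named entities and replaces them with placeholders before the text leaves the user's device, then restores them after translation. The following is a minimal sketch of that mask-and-restore idea; the entity list, the `<ENTi>` placeholder format, and the helper names are illustrative assumptions, not the paper's actual implementation (which would use a real NER model and an MT service).

```python
# Hedged sketch of an NER mask-and-restore pipeline for inference-stage
# privacy. Entities here are supplied directly; a real system would obtain
# them from an on-device NER model.

def mask_entities(text, entities):
    """Replace each named entity with an indexed placeholder so the
    cloud MT service never sees the sensitive strings."""
    mapping = {}
    masked = text
    for i, ent in enumerate(entities):
        tag = f"<ENT{i}>"
        mapping[tag] = ent
        masked = masked.replace(ent, tag)
    return masked, mapping

def restore_entities(translated, mapping):
    """Reinsert the original entities into the translated output.
    A stronger variant could translate/transliterate each entity
    locally before reinserting it."""
    for tag, ent in mapping.items():
        translated = translated.replace(tag, ent)
    return translated

# Example: protect a person name and a location before a cloud call.
src = "Alice Johnson met the supplier in Berlin."
masked, mapping = mask_entities(src, ["Alice Johnson", "Berlin"])
# `masked` is "<ENT0> met the supplier in <ENT1>." and is what gets sent.
fake_translation = masked  # stand-in for the MT service's response
restored = restore_entities(fake_translation, mapping)
```

In a real deployment, the quality trade-off the paper evaluates comes from placeholders surviving translation intact and from entities needing correct inflection or transliteration in the target language.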
Problem

Research questions and friction points this paper is trying to address.

Privacy-Preserving Machine Translation
inference-stage privacy
named entity privacy
machine translation
privacy leakage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Privacy-Preserving Machine Translation
Inference-stage Privacy
Named Entity Protection
Benchmark Dataset
Machine Translation