🤖 AI Summary
Language models deployed in sensitive domains such as healthcare risk memorizing and leaking direct or indirect identifiers, posing serious privacy threats. Method: This paper proposes a "privacy-first" language modeling paradigm, introducing the first privacy-by-design framework for LMs. The framework systematically distinguishes and jointly mitigates memorization of both direct and indirect identifiers, and develops provably anonymizing Masked Language Modeling (MLM) and Causal Language Modeling (CLM) variants tailored to BERT and GPT architectures, respectively, incorporating privacy-aware data filtering, optimized masking strategies, and a rigorous evaluation protocol. Results: Evaluated across multiple medical datasets, the approach significantly reduces identifier regeneration rates (p < 0.001) while incurring less than 2% degradation in downstream task performance, achieving a state-of-the-art privacy–utility trade-off.
📝 Abstract
Rapid advances in Natural Language Processing (NLP) have revolutionized many fields, including healthcare. However, these advances raise significant privacy concerns, especially when models specialized on sensitive data can memorize and later regurgitate confidential information. This paper presents a privacy-by-design language modeling approach to the problem of language model anonymization, thereby promoting model sharing. Specifically, we propose both a Masked Language Modeling (MLM) methodology to specialize a BERT-like language model and a Causal Language Modeling (CLM) methodology to specialize a GPT-like model, each of which prevents the model from memorizing direct and indirect identifying information present in the training data. We comprehensively evaluated our approaches on medical datasets and compared them against several baselines. Our results indicate that by avoiding memorization of both direct and indirect identifiers during model specialization, our masked and causal language modeling schemes offer the best trade-off between high privacy and high utility.
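To make the MLM idea concrete, the following is a minimal sketch of one plausible instantiation, not the paper's actual implementation: identifier spans (assumed to be annotated upstream, e.g. by a de-identification tagger) are always masked in the input and excluded from the prediction targets, so the loss never rewards the model for regenerating them. The function name, the `-100` ignore-label convention, and the span format are illustrative assumptions.

```python
import random

def build_private_mlm_example(tokens, identifier_spans,
                              mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Build an MLM training example that never uses identifier tokens as targets.

    tokens: list of token strings.
    identifier_spans: list of (start, end) index pairs marking direct/indirect
        identifiers (hypothetical upstream annotation, e.g. from a NER tagger).
    Returns (inputs, labels); a label of -100 is ignored by typical
    cross-entropy implementations, so those positions contribute no loss.
    """
    rng = random.Random(seed)
    protected = set()
    for start, end in identifier_spans:
        protected.update(range(start, end))

    inputs, labels = [], []
    for i, tok in enumerate(tokens):
        if i in protected:
            # Identifier tokens are masked in the input but NEVER predicted,
            # so the model is not trained to regenerate them.
            inputs.append(mask_token)
            labels.append(-100)
        elif rng.random() < mask_prob:
            # Ordinary MLM: mask a random non-identifier token and predict it.
            inputs.append(mask_token)
            labels.append(tok)
        else:
            inputs.append(tok)
            labels.append(-100)
    return inputs, labels
```

For example, with `tokens = ["Patient", "John", "Smith", "was", "admitted"]` and `identifier_spans = [(1, 3)]`, the name tokens are replaced by `[MASK]` in the input and carry the ignore label, while the remaining tokens participate in standard random masking.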