🤖 AI Summary
This paper addresses semantic privacy risks in large language models (LLMs), specifically the leakage of implicit, context-dependent, or inferable information in sensitive scenarios. It systematically analyzes root causes across the LLM lifecycle: input processing, pretraining, fine-tuning, and alignment. The authors propose the first holistic semantic privacy risk analysis framework for LLMs, exposing fundamental limitations of existing defenses against contextual inference and latent representation leakage, and empirically evaluate differential privacy, embedding encryption, edge computing, and machine unlearning for semantic-level protection. The work identifies key open challenges, including multimodal privacy preservation, de-identification, and the trade-off between privacy and generation quality, and offers both theoretical foundations and practical guidelines for designing semantics-aware privacy mechanisms.
📝 Abstract
As Large Language Models (LLMs) are increasingly deployed in sensitive domains, traditional data privacy measures prove inadequate for protecting information that is implicit, contextual, or inferable, which we define as semantic privacy. This Systematization of Knowledge (SoK) introduces a lifecycle-centric framework to analyze how semantic privacy risks emerge across the input processing, pretraining, fine-tuning, and alignment stages of LLMs. We categorize key attack vectors and assess how current defenses, such as differential privacy, embedding encryption, edge computing, and machine unlearning, address these threats. Our analysis reveals critical gaps in semantic-level protection, especially against contextual inference and latent representation leakage. We conclude by outlining open challenges, including quantifying semantic leakage, protecting multimodal inputs, balancing de-identification with generation quality, and ensuring transparency in privacy enforcement. This work aims to inform future research on designing robust, semantically aware privacy-preserving techniques for LLMs.
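To make one of the surveyed defenses concrete, below is a minimal sketch of differentially private perturbation of a text embedding via the Gaussian mechanism, one way to blunt latent representation leakage. The function name, parameter choices, and noise calibration are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dp_noise_embedding(embedding: np.ndarray, clip_norm: float = 1.0,
                       epsilon: float = 1.0, delta: float = 1e-5) -> np.ndarray:
    """Clip an embedding to a bounded L2 norm, then add Gaussian noise
    calibrated for (epsilon, delta)-differential privacy."""
    # Clipping bounds the L2 sensitivity of the released vector at clip_norm.
    norm = np.linalg.norm(embedding)
    clipped = embedding * min(1.0, clip_norm / (norm + 1e-12))
    # Standard Gaussian-mechanism noise scale for L2 sensitivity = clip_norm
    # (valid for epsilon <= 1).
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=clipped.shape)

# Hypothetical usage: privatize a 768-dimensional sentence embedding
# before it leaves the device (cf. the edge-computing defenses above).
emb = np.random.randn(768)
private_emb = dp_noise_embedding(emb, clip_norm=1.0, epsilon=1.0)
```

Releasing only the noised vector limits what an adversary can infer from the latent representation, at the cost of downstream utility, which is exactly the privacy/generation-quality trade-off the paper highlights.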