SoK: Semantic Privacy in Large Language Models

📅 2025-06-30
🤖 AI Summary
This paper addresses semantic privacy risks in large language models (LLMs), specifically the leakage of implicit, context-dependent, or inferable information in sensitive scenarios. It systematically analyzes root causes across the LLM lifecycle: input processing, pretraining, fine-tuning, and alignment. The authors propose the first holistic semantic privacy risk analysis framework for LLMs, exposing fundamental limitations of existing defenses against contextual inference and latent representation leakage. Through empirical evaluation, they assess the semantic-level protection efficacy of differential privacy, embedding encryption, edge computing, and machine unlearning. The work establishes a comprehensive research framework for LLM semantic privacy, identifying key open challenges, including multimodal privacy preservation, de-identification, and the trade-off between privacy and generation quality, and provides both theoretical foundations and practical guidelines for designing semantics-aware privacy mechanisms.

📝 Abstract
As Large Language Models (LLMs) are increasingly deployed in sensitive domains, traditional data privacy measures prove inadequate for protecting information that is implicit, contextual, or inferable, which we define as semantic privacy. This Systematization of Knowledge (SoK) introduces a lifecycle-centric framework to analyze how semantic privacy risks emerge across the input processing, pretraining, fine-tuning, and alignment stages of LLMs. We categorize key attack vectors and assess how current defenses, such as differential privacy, embedding encryption, edge computing, and unlearning, address these threats. Our analysis reveals critical gaps in semantic-level protection, especially against contextual inference and latent representation leakage. We conclude by outlining open challenges, including quantifying semantic leakage, protecting multimodal inputs, balancing de-identification with generation quality, and ensuring transparency in privacy enforcement. This work aims to inform future research on designing robust, semantically aware privacy-preserving techniques for LLMs.
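Among the defenses the abstract lists, differential privacy is the most mechanical to illustrate. The sketch below shows the classic Gaussian mechanism applied to an embedding vector; it is a minimal, self-contained illustration of the general technique, not the paper's actual protocol, and the embedding values, sensitivity, and privacy parameters are hypothetical.

```python
import math
import random

def gaussian_mechanism(embedding, sensitivity, epsilon, delta):
    """Perturb an embedding with Gaussian noise calibrated for
    (epsilon, delta)-differential privacy.

    Uses the standard analytic calibration
        sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon,
    which is illustrative; real deployments tune this per threat model.
    """
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return [x + random.gauss(0.0, sigma) for x in embedding]

# Hypothetical 4-dimensional sentence embedding (not from the paper)
emb = [0.12, -0.48, 0.33, 0.07]
noised = gaussian_mechanism(emb, sensitivity=1.0, epsilon=1.0, delta=1e-5)
```

A smaller epsilon (stronger privacy) inflates sigma and degrades downstream utility, which is exactly the privacy/generation-quality trade-off the paper flags as an open challenge.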
Problem

Research questions and friction points this paper aims to address.

Analyzing semantic privacy risks in LLM lifecycle stages
Assessing current defenses against contextual inference attacks
Identifying gaps in protecting multimodal and latent data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lifecycle-centric framework for semantic privacy
Assessing defenses like differential privacy and encryption
Addressing gaps in contextual inference protection
Baihe Ma
University of Technology Sydney
cyber security and privacy · VANET · AI privacy · AI trust · LLM
Yanna Jiang
University of Technology Sydney, Global Big Data Technologies Centre, Sydney, Australia
Xu Wang
University of Technology Sydney, Global Big Data Technologies Centre, Sydney, Australia
Guangshen Yu
University of Technology Sydney, Global Big Data Technologies Centre, Sydney, Australia
Qin Wang
ETH Zurich
Domain Adaptation · Computer Vision
Caijun Sun
Zhejiang Lab, Hangzhou, China
Chen Li
University of Technology Sydney, Global Big Data Technologies Centre, Sydney, Australia
Xuelei Qi
University of Technology Sydney, Global Big Data Technologies Centre, Sydney, Australia
Ying He
University of Technology Sydney, Global Big Data Technologies Centre, Sydney, Australia
Wei Ni
FIEEE, AAIA Fellow, Senior Principal Scientist & Conjoint Professor, CSIRO/UNSW
6G security and privacy · connected and trusted intelligence · applied AI/ML
Ren Ping Liu
University of Technology Sydney
Wireless Networking · Network Security · Blockchain