Covert Prompt Transmission for Secure Large Language Model Services

📅 2025-04-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deploying large language models (LLMs) over wireless networks faces a fundamental trade-off among security, transmission stealthiness, and ultra-low latency. Method: This paper proposes a surprisal-guided, semantic-aware prompt compression framework combined with lightweight permutation-based encryption, and optimizes covert transmission via a group-based proximal policy optimization (GPPO) algorithm whose Kullback-Leibler (KL) divergence penalty improves policy stability and exploration efficiency. A locally deployed small language model (SLM) performs semantics-preserving compression directly on the edge device, while transmit power and prompt compression ratio are jointly optimized under fidelity and detectability constraints. Contributions/Results: Preprocessing latency is reduced by over five orders of magnitude, enabling real-time edge deployment; response fidelity is preserved on mainstream 32B-class models (e.g., DeepSeek-32B, Qwen-32B); and end-to-end covert transmission latency decreases by up to 38.6% compared with baseline reinforcement learning strategies.

📝 Abstract
This paper investigates covert prompt transmission for secure and efficient large language model (LLM) services over wireless networks. We formulate a latency minimization problem under fidelity and detectability constraints to ensure confidential and covert communication by jointly optimizing the transmit power and prompt compression ratio. To solve this problem, we first propose a prompt compression and encryption (PCAE) framework, performing surprisal-guided compression followed by lightweight permutation-based encryption. Specifically, PCAE employs a locally deployed small language model (SLM) to estimate token-level surprisal scores, selectively retaining semantically critical tokens while discarding redundant ones. This significantly reduces computational overhead and transmission duration. To further enhance covert wireless transmission, we then develop a group-based proximal policy optimization (GPPO) method that samples multiple candidate actions for each state, selecting the optimal one within each group and incorporating a Kullback-Leibler (KL) divergence penalty to improve policy stability and exploration. Simulation results show that PCAE achieves comparable LLM response fidelity to baseline methods while reducing preprocessing latency by over five orders of magnitude, enabling real-time edge deployment. We further validate PCAE effectiveness across diverse LLM backbones, including DeepSeek-32B, Qwen-32B, and their smaller variants. Moreover, GPPO reduces covert transmission latency by up to 38.6% compared to existing reinforcement learning strategies, with further analysis showing that increased transmit power provides additional latency benefits.
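The two PCAE stages described in the abstract, surprisal-guided compression followed by permutation-based encryption, can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the unigram probability table stands in for the SLM's token-level surprisal estimates, and all function names and thresholds are assumptions.

```python
import math
import random

def token_surprisal(tokens, unigram_probs):
    """Surprisal in bits per token: -log2 p(token).
    A real PCAE deployment would query a locally deployed SLM for
    conditional token probabilities; a unigram table stands in here."""
    return [-math.log2(unigram_probs.get(t, 1e-6)) for t in tokens]

def compress_prompt(tokens, unigram_probs, keep_ratio=0.5):
    """Keep the highest-surprisal (most informative) tokens,
    preserving their original order; drop redundant ones."""
    scores = token_surprisal(tokens, unigram_probs)
    k = max(1, round(len(tokens) * keep_ratio))
    keep_idx = sorted(sorted(range(len(tokens)), key=lambda i: -scores[i])[:k])
    return [tokens[i] for i in keep_idx]

def permute_encrypt(tokens, key):
    """Lightweight permutation 'encryption': reorder token positions
    with a key-seeded PRNG. Returns ciphertext and the permutation."""
    rng = random.Random(key)
    perm = list(range(len(tokens)))
    rng.shuffle(perm)
    return [tokens[p] for p in perm], perm

def permute_decrypt(ciphertext, perm):
    """Invert the permutation to recover the compressed prompt."""
    plain = [None] * len(ciphertext)
    for out_pos, src_pos in enumerate(perm):
        plain[src_pos] = ciphertext[out_pos]
    return plain
```

In this sketch, rare (high-surprisal) tokens such as domain terms survive compression while common function words are discarded, which matches the paper's intuition that surprisal tracks semantic importance.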
Problem

Research questions and friction points this paper is trying to address.

Minimize end-to-end latency of covert prompt transmission for secure LLM services
Jointly optimize transmit power and prompt compression ratio for confidential wireless communication
Improve transmission efficiency while preserving response fidelity and satisfying covertness (detectability) constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

PCAE framework compresses and encrypts prompts efficiently
GPPO minimizes covert transmission latency via group-based action sampling with a KL-divergence penalty
Surprisal-guided compression retains critical semantic tokens
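The group-based sampling idea behind GPPO, sampling several candidate actions per state, keeping the best in the group, and penalizing drift from a reference policy, can be sketched as below. The reward function, action bounds, and update rule here are toy assumptions for illustration only; the paper's actual objective couples transmit power, compression ratio, and a detectability constraint in a wireless channel model.

```python
import random

def latency_reward(power, ratio):
    """Toy stand-in for the paper's objective: higher power and stronger
    compression shorten transmission, but a covertness (detectability)
    penalty caps usable power. Not the paper's formulation."""
    if not (0.0 < power <= 1.0 and 0.1 <= ratio <= 1.0):
        return -1.0
    covert_penalty = max(0.0, power - 0.8) * 5.0  # detectability constraint
    return power * (1.0 - 0.5 * ratio) - covert_penalty

def gppo_step(policy_mean, group_size=8, kl_coeff=0.1,
              ref_mean=(0.5, 0.5), lr=0.2, seed=0):
    """Sample a group of candidate (power, ratio) actions, keep the
    best-rewarded one, and nudge the policy mean toward it, with a
    KL-style pull keeping the update close to a reference policy."""
    rng = random.Random(seed)
    candidates = [
        (min(max(rng.gauss(policy_mean[0], 0.1), 1e-3), 1.0),
         min(max(rng.gauss(policy_mean[1], 0.1), 0.1), 1.0))
        for _ in range(group_size)
    ]
    best = max(candidates, key=lambda a: latency_reward(*a))
    new_mean = tuple(
        m + lr * ((b - m) - kl_coeff * (m - r))
        for m, b, r in zip(policy_mean, best, ref_mean)
    )
    return new_mean, latency_reward(*best)
```

The toy reward makes the covertness trade-off visible: pushing power past the detectability threshold (0.8 here) hurts the objective even though raw power would otherwise reduce latency.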