🤖 AI Summary
To address the dual security and performance challenges posed by prompt attacks—such as privacy leakage, resource exhaustion, and service degradation—in edge-cloud collaborative large language model (EC-LLM) systems, this paper proposes a lightweight defense framework that jointly optimizes security detection and inference latency. We introduce the first multi-stage dynamic Bayesian game model to enable online attack intent prediction and real-time belief updating. Additionally, we design an efficient prompt attack detector leveraging a vector database, achieving low-overhead, high-accuracy detection. The framework further integrates Bayesian belief updating with edge-cloud cooperative inference scheduling. Evaluated on a real-world EC-LLM system, our approach achieves a significant improvement in attack detection rate, reduces end-to-end latency for benign users by 18.7%, and decreases system resource consumption by 23.4%.
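The summary mentions a vector-database-enabled detector that matches incoming prompts against stored attack embeddings. The paper does not publish its implementation, so the following is only a minimal sketch of the general idea: embed each prompt, look up its nearest neighbor among known attack embeddings, and flag it when the cosine similarity exceeds a threshold. The toy trigram `embed` function, the `PromptAttackDetector` class, and the `0.8` threshold are all illustrative placeholders, not the paper's method — a real system would use an LLM encoder and a proper vector database.

```python
import numpy as np

def embed(text, dim=64):
    """Toy embedding via hashed character trigrams (placeholder for a real encoder)."""
    v = np.zeros(dim)
    for i in range(len(text) - 2):
        v[hash(text[i:i + 3]) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

class PromptAttackDetector:
    """Flags a prompt when its nearest stored attack embedding is too similar."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.attack_vectors = []  # stands in for the vector database

    def add_attack_example(self, prompt):
        self.attack_vectors.append(embed(prompt))

    def is_malicious(self, prompt):
        q = embed(prompt)
        # On unit vectors, cosine similarity reduces to a dot product.
        best = max((float(q @ a) for a in self.attack_vectors), default=0.0)
        return best >= self.threshold
```

Because detection is a single nearest-neighbor lookup rather than a forward pass through a safety-tuned model, the per-prompt overhead stays low, which is the property the summary emphasizes.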
📝 Abstract
Large language models (LLMs) have significantly facilitated many aspects of daily life, and prompt engineering has improved the efficiency with which these models are used. However, recent years have witnessed a rise in prompt-engineering-empowered attacks, leading to issues such as privacy leaks, increased latency, and wasted system resources. Although safety fine-tuning methods based on Reinforcement Learning from Human Feedback (RLHF) have been proposed to align LLMs, these security mechanisms fail to cope with rapidly evolving prompt attacks, highlighting the necessity of performing security detection on prompts themselves. In this paper, we jointly consider prompt security, service latency, and system resource optimization in Edge-Cloud LLM (EC-LLM) systems under various prompt attacks. To enhance prompt security, we propose a vector-database-enabled lightweight attack detector. We formalize the joint problem of prompt detection, latency, and resource optimization as a multi-stage dynamic Bayesian game, in which the equilibrium strategy is determined by predicting the number of malicious tasks and updating beliefs at each stage through Bayesian updates. The proposed scheme is evaluated on a real EC-LLM implementation, and the results demonstrate that, compared with state-of-the-art algorithms, our approach offers enhanced security, reduces service latency for benign users, and decreases system resource consumption.
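The abstract's core mechanism is belief updating in the multi-stage Bayesian game: at each stage the scheduler revises its belief that a user is malicious based on observed signals (e.g., detector flags). The paper's exact likelihood model is not given here, so this is a hedged sketch of a standard Bayes update; the `likelihoods` table and the 0.9/0.1 detection rates in the usage example are assumed values for illustration only.

```python
def update_belief(prior_malicious, signal, likelihoods):
    """One Bayesian update of the belief that a user is malicious.

    likelihoods[user_type][signal] = P(signal | user_type),
    with user_type in {'malicious', 'benign'}.
    """
    p_m = prior_malicious * likelihoods['malicious'][signal]
    p_b = (1.0 - prior_malicious) * likelihoods['benign'][signal]
    return p_m / (p_m + p_b)

def belief_after_stages(prior, signals, likelihoods):
    """Fold the per-stage update over a sequence of observed signals."""
    belief = prior
    for s in signals:
        belief = update_belief(belief, s, likelihoods)
    return belief
```

For example, with an assumed detector that flags malicious prompts with probability 0.9 and benign ones with probability 0.1, a prior of 0.2 rises to 0.2·0.9 / (0.2·0.9 + 0.8·0.1) ≈ 0.69 after a single flag; repeated flags drive the belief toward 1, which is what lets the scheduler steer likely attackers away from scarce edge resources.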