🤖 AI Summary
This work addresses the critical privacy risks in cloud-hosted large language models, where user prompts and responses are often exposed during inference. To reconcile privacy, performance, and efficiency, the authors propose Talaria, a novel framework that partitions the inference pipeline into sensitive operations and weight computations. Sensitive operations execute within a client-side confidential virtual machine (CVM), while computationally intensive weight calculations are offloaded to cloud GPUs. Intermediate data is protected via a custom Reversible Masked Outsourcing (ReMO) protocol. Talaria is the first framework to keep user inputs and outputs entirely inaccessible to the cloud provider while preserving the model's intellectual property, achieving lossless inference accuracy, and maintaining high efficiency and scalability. Experiments demonstrate its resilience against token inference attacks, reducing token reconstruction accuracy from 97.5% to 1.34%, with output identical to the original model and minimal overhead.
📝 Abstract
The increasing reliance on cloud-hosted Large Language Models (LLMs) exposes sensitive client data, such as prompts and responses, to potential privacy breaches by service providers. Existing approaches fail to simultaneously ensure privacy, maintain model performance, and preserve computational efficiency. To address this challenge, we propose Talaria, a confidential inference framework that partitions the LLM pipeline to protect client data without compromising the cloud's model intellectual property or inference quality. Talaria executes sensitive, weight-independent operations within a client-controlled Confidential Virtual Machine (CVM) while offloading weight-dependent computations to cloud GPUs. The interaction between these environments is secured by our Reversible Masked Outsourcing (ReMO) protocol, which uses a hybrid masking technique to reversibly obscure intermediate data before outsourcing computations. Extensive evaluations show that Talaria defends against state-of-the-art token inference attacks, reducing token reconstruction accuracy from over 97.5% to an average of 1.34%, while remaining a lossless mechanism that guarantees output identical to the original model, without a significant cost in efficiency or scalability. To the best of our knowledge, this is the first work that keeps clients' prompts and responses inaccessible to the cloud while also preserving model privacy, performance, and efficiency.
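The abstract does not detail ReMO's construction, but the algebraic idea behind reversible masking of a weight-dependent computation can be illustrated with a toy sketch. The example below is a hypothetical simplification, not the paper's actual protocol: it relies on the linearity of a matrix-vector product, `W @ (x + r) = W @ x + W @ r`, so a client that holds `W @ r` for a fresh random mask `r` can remove the mask after the cloud computes on the masked state. How the client obtains `W @ r` without learning `W` (e.g., an offline precomputation phase) is assumed away here.

```python
import random

def matvec(W, x):
    """Plain matrix-vector product: y[i] = sum_j W[i][j] * x[j]."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

# Cloud-held weights; in the real setting the client never sees W.
W = [[1.0, 2.0, -1.0],
     [0.5, 0.0,  3.0]]

# --- Offline phase (simplification): the client is assumed to obtain
# --- Wr = W @ r for a fresh random mask r without learning W itself.
x = [0.5, -1.0, 2.0]                     # sensitive hidden state (client CVM)
r = [random.gauss(0.0, 1.0) for _ in x]  # one-time additive mask
Wr = matvec(W, r)                        # stands in for the offline result

# --- Online phase ---
x_masked = [a + b for a, b in zip(x, r)]    # client sends masked state only
y_masked = matvec(W, x_masked)              # cloud computes on masked data
y = [a - b for a, b in zip(y_masked, Wr)]   # client unmasks inside the CVM

# Unmasking is exact: the result equals the unmasked computation.
expected = matvec(W, x)
assert all(abs(a - b) < 1e-9 for a, b in zip(y, expected))
```

Because unmasking is an exact algebraic inverse rather than an approximation, a scheme of this shape is lossless, which is consistent with the paper's claim of output identical to the original model.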