Your Inference Request Will Become a Black Box: Confidential Inference for Cloud-based Large Language Models

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the privacy risks of cloud-hosted large language models, where user prompts and responses are exposed to the service provider during inference. To reconcile privacy, performance, and efficiency, the authors propose Talaria, a framework that partitions the inference pipeline into sensitive operations and weight computations. Sensitive operations execute within a client-side Confidential Virtual Machine (CVM), while computationally intensive weight computations are offloaded to cloud GPUs. Intermediate data is protected by the authors' Reversible Masked Outsourcing (ReMO) protocol. Talaria is the first system to render user inputs and outputs entirely invisible to the cloud provider while preserving the provider's model intellectual property, achieving lossless inference accuracy, and maintaining high efficiency and scalability. Experiments demonstrate resilience against token inference attacks, reducing token reconstruction accuracy from 97.5% to 1.34%, with output identical to the original model and minimal overhead.

📝 Abstract
The increasing reliance on cloud-hosted Large Language Models (LLMs) exposes sensitive client data, such as prompts and responses, to potential privacy breaches by service providers. Existing approaches fail to ensure privacy, maintain model performance, and preserve computational efficiency simultaneously. To address this challenge, we propose Talaria, a confidential inference framework that partitions the LLM pipeline to protect client data without compromising the cloud's model intellectual property or inference quality. Talaria executes sensitive, weight-independent operations within a client-controlled Confidential Virtual Machine (CVM) while offloading weight-dependent computations to the cloud GPUs. The interaction between these environments is secured by our Reversible Masked Outsourcing (ReMO) protocol, which uses a hybrid masking technique to reversibly obscure intermediate data before outsourcing computations. Extensive evaluations show that Talaria can defend against state-of-the-art token inference attacks, reducing token reconstruction accuracy from over 97.5% to an average of 1.34%, all while being a lossless mechanism that guarantees output identical to the original model without significantly decreasing efficiency and scalability. To the best of our knowledge, this is the first work that ensures clients' prompts and responses remain inaccessible to the cloud, while also preserving model privacy, performance, and efficiency.
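The ReMO protocol's core idea of reversibly masking intermediate data before outsourcing weight-dependent computation can be illustrated with a minimal sketch. The example below shows additive masking over a single linear layer, exploiting linearity so that the cloud computes on masked activations only; it assumes the client can obtain the unmasking term `r @ W` in an offline phase, which is a simplification for illustration (the paper's actual hybrid masking technique is more involved). All names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
d_in, d_out = 8, 4
W = rng.standard_normal((d_in, d_out))   # cloud-held model weights

# --- Offline phase (illustrative assumption): the client obtains r @ W
# for a random mask r, e.g. via a one-time precomputation.
r = rng.standard_normal(d_in)
rW = r @ W                               # unmasking key, kept client-side

# --- Online phase ---
x = rng.standard_normal(d_in)            # sensitive activation, inside the client CVM
x_masked = x + r                         # reversibly masked before leaving the CVM

y_masked = x_masked @ W                  # cloud sees only the masked activation
y = y_masked - rW                        # client strips the mask

# Lossless: (x + r) @ W - r @ W == x @ W exactly (up to float rounding),
# matching the paper's claim of output identical to plain inference.
assert np.allclose(y, x @ W)
```

The key property this sketch demonstrates is why masking can be lossless: because the masked result differs from the true result by a term depending only on the mask, unmasking recovers the exact plaintext output, unlike approximation-based approaches such as noisy perturbation.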
Problem

Research questions and friction points this paper is trying to address.

confidential inference
large language models
privacy
cloud computing
data confidentiality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Confidential Inference
Large Language Models
Reversible Masked Outsourcing
Confidential Virtual Machine
Privacy-Preserving AI
Chung-ju Huang
Key Lab of High Confidence Software Technologies (Peking University), Ministry of Education, China; School of Computer Science, Peking University, Beijing, China
Huiqiang Zhao
Tencent, Shenzhen, China
Yuanpeng He
Key Lab of High Confidence Software Technologies (Peking University), Ministry of Education, China; School of Computer Science, Peking University, Beijing, China
Lijian Li
Macau University
Computer Vision
Wenpin Jiao
Key Lab of High Confidence Software Technologies (Peking University), Ministry of Education, China; School of Computer Science, Peking University, Beijing, China
Zhi Jin
Sun Yat-Sen University, Associate Professor
Peixuan Chen
Tencent, Shenzhen, China
Leye Wang
Tenured Associate Professor, Peking University
Ubiquitous Computing, Urban Computing, Crowdsensing, Federated Learning