RobPI: Robust Private Inference against Malicious Client

📅 2026-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes RobPI, the first robust private inference protocol designed to defend against malicious clients who attempt to manipulate model outputs, a vulnerability unaddressed by existing protocols that assume semi-honest adversaries. RobPI injects encryption-compatible noise into the private inference computation, simultaneously protecting both the logits and the intermediate feature layers. The approach achieves strong security guarantees with minimal impact on benign inference performance. Empirical evaluations across diverse neural network architectures and datasets show that RobPI substantially raises the cost of adversarial attacks: attack success rates drop by 91.9% on average, while the number of required queries increases by more than an order of magnitude.

📝 Abstract
The increased deployment of machine learning inference in various applications has sparked privacy concerns. In response, private inference (PI) protocols have been developed to allow parties to perform inference without revealing their sensitive data. Despite recent advances in PI efficiency, most current methods assume a semi-honest threat model in which the data owner is honest and adheres to the protocol. In reality, however, data owners can have different motivations and act unpredictably, making this assumption unrealistic. To demonstrate how a malicious client can compromise the semi-honest model, we first design an inference manipulation attack against a range of state-of-the-art private inference protocols. This attack allows a malicious client to modify the model output with 3x to 8x fewer queries than current black-box attacks. Motivated by these attacks, we propose and implement RobPI, a robust and resilient private inference protocol that withstands malicious clients. RobPI integrates a distinctive cryptographic protocol that bolsters security by weaving encryption-compatible noise into the logits and features of private inference, thereby efficiently warding off malicious-client attacks. Our extensive experiments on various neural networks and datasets show that RobPI reduces attack success rates by ~91.9% and increases the number of queries required by malicious-client attacks by more than 10x.
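The defense described in the abstract rests on a simple idea: perturb the scores a querying client observes so that query-based manipulation attacks lose their gradient signal, while the returned label stays correct for benign users. Below is a minimal plaintext sketch of that idea. It is an illustrative assumption, not the paper's implementation: RobPI weaves the noise into the encrypted computation itself, and the function name, noise scale, and label-restoration step here are all hypothetical.

```python
import random

def noisy_logits(logits, scale=0.5, seed=None):
    """Hypothetical plaintext sketch of RobPI-style logit noise injection.

    Gaussian noise perturbs the confidence scores that query-based
    attacks exploit; the clean top-1 label is then re-imposed so
    benign accuracy is unaffected. The real protocol performs the
    analogous step under encryption.
    """
    rng = random.Random(seed)
    noisy = [x + rng.gauss(0.0, scale) for x in logits]
    clean_top = max(range(len(logits)), key=logits.__getitem__)
    noisy_top = max(range(len(noisy)), key=noisy.__getitem__)
    if noisy_top != clean_top:
        # Restore the clean prediction: bump the true top logit above all others.
        noisy[clean_top] = max(noisy) + 1e-6
    return noisy

clean = [2.3, 0.1, -1.0]
protected = noisy_logits(clean, scale=0.5, seed=42)
# The returned label matches the clean prediction.
assert max(range(3), key=protected.__getitem__) == 0
```

Because an attacker must average many responses to cancel the noise, the query count needed for a successful manipulation grows, which matches the paper's reported 10x-plus increase in required queries.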
Problem

Research questions and friction points this paper is trying to address.

Private Inference
Malicious Client
Threat Model
Inference Manipulation
Privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Private Inference
Malicious Client
Robustness
Cryptographic Protocol
Inference Manipulation Attack