UFO: Unlocking Ultra-Efficient Quantized Private Inference with Protocol and Algorithm Co-Optimization

📅 2026-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Private CNN inference under secure two-party computation (2PC) faces dual challenges of efficiency and accuracy, primarily due to the high communication overhead of convolutional layers and accuracy degradation caused by combining quantization with the Winograd algorithm. This work proposes UFO, a co-optimized framework that jointly refines protocol and algorithm design by integrating Winograd convolution with quantization-aware training (QAT). The approach introduces graph-level communication optimization, layer-sensitivity-driven mixed-precision QAT, and a 2PC-friendly bit reweighting mechanism that avoids bit-width expansion. Compared to the state-of-the-art frameworks SiRNN, COINN, and CoPriv, UFO reduces communication costs by 11.7×, 3.6×, and 6.3×, respectively, while simultaneously improving model accuracy by 1.29%, 1.16%, and 1.29%.
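The core of QAT is the quantize-dequantize ("fake quant") step: values are snapped to a low-bit grid during the forward pass so the network learns under quantization noise, and a mixed-precision scheme simply assigns different bit widths to different layers. The sketch below is illustrative only; the max-abs scale choice and bit widths are assumptions, not the paper's actual QAT or bit-reweighting algorithm.

```python
# Minimal sketch of symmetric uniform fake quantization, the building
# block of quantization-aware training (QAT). A b-bit signed grid has
# integer levels in [-(2^(b-1)-1), 2^(b-1)-1]; each value is scaled,
# rounded to the grid, and scaled back.

def fake_quant(xs, bits):
    """Quantize-dequantize xs to a symmetric `bits`-bit grid."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8 bits
    scale = max(abs(x) for x in xs) / qmax or 1.0  # max-abs calibration
    return [round(x / scale) * scale for x in xs]

# Mixed precision: a sensitive layer keeps 8 bits, a robust one gets 4.
w = [0.50, -0.23, 0.07, 0.91]
w8 = fake_quant(w, 8)   # fine grid, near-lossless
w4 = fake_quant(w, 4)   # coarse grid, cheaper under 2PC
```

Per-layer bit assignment driven by a sensitivity metric is the idea behind the paper's layer-sensitivity-driven mixed-precision QAT; the metric itself is not reproduced here.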

📝 Abstract
Private convolutional neural network (CNN) inference based on secure two-party computation (2PC) suffers from high communication and latency overhead, especially from convolution layers. In this paper, we propose UFO, a quantized 2PC inference framework that jointly optimizes the 2PC protocols and quantization algorithm. UFO features a novel 2PC protocol that systematically combines the efficient Winograd convolution algorithm with quantization to improve inference efficiency. However, we observe that naively combining quantization and Winograd convolution faces the following challenges: 1) From the inference perspective, Winograd transformations introduce extensive additions and require frequent bit width conversions to avoid inference overflow, leading to non-negligible communication overhead; 2) From the training perspective, Winograd transformations introduce weight outliers that make quantization-aware training (QAT) difficult, resulting in inferior model accuracy. To address these challenges, we co-optimize both protocol and algorithm. 1) At the protocol level, we propose a series of graph-level optimizations for 2PC inference to minimize the communication. 2) At the algorithm level, we develop a mixed-precision QAT algorithm based on layer sensitivity to optimize model accuracy given communication constraints. To accommodate the outliers, we further introduce a 2PC-friendly bit re-weighting algorithm to increase the representation range without explicitly increasing bit widths. With extensive experiments, UFO demonstrates 11.7x, 3.6x, and 6.3x communication reduction with 1.29%, 1.16%, and 1.29% higher accuracy compared to state-of-the-art frameworks SiRNN, COINN, and CoPriv, respectively.
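For readers unfamiliar with the Winograd algorithm the abstract builds on, the smallest instance, F(2,3), computes two outputs of a 1-D 3-tap convolution with 4 multiplications instead of 6, trading multiplications for the extra additions the abstract mentions. This sketch uses the standard transform matrices; it is a plain illustration of the algorithm, not UFO's 2PC protocol.

```python
# Winograd F(2,3): y = A^T [(G g) * (B^T d)], where d is a length-4
# input tile, g a length-3 filter, and * is elementwise multiplication.
# Only 4 multiplies are needed; the transforms cost only additions
# (and halving in G), which is why Winograd shifts work from
# multiplications to additions.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

BT = [[1,  0, -1,  0],   # input transform B^T
      [0,  1,  1,  0],
      [0, -1,  1,  0],
      [0,  1,  0, -1]]
G  = [[1.0,  0.0, 0.0],  # filter transform G
      [0.5,  0.5, 0.5],
      [0.5, -0.5, 0.5],
      [0.0,  0.0, 1.0]]
AT = [[1, 1,  1,  0],    # output transform A^T
      [0, 1, -1, -1]]

def winograd_f23(d, g):
    U = matvec(G, g)                    # transform the filter
    V = matvec(BT, d)                   # transform the input tile
    M = [u * v for u, v in zip(U, V)]   # 4 elementwise multiplies
    return matvec(AT, M)                # fold back to 2 outputs

def direct(d, g):
    """Reference sliding-window correlation for comparison."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]

d, g = [1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0]
assert winograd_f23(d, g) == direct(d, g) == [6.0, 9.0]
```

The many additions in `BT`/`AT`, and the wider intermediate values they produce, are exactly the source of the bit-width conversions and communication overhead the abstract identifies in the 2PC setting.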
Problem

Research questions and friction points this paper is trying to address.

private inference
secure two-party computation
quantization
Winograd convolution
communication overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

co-optimization
quantized private inference
Winograd convolution
mixed-precision QAT
secure two-party computation