SecFormer: Fast and Accurate Privacy-Preserving Inference for Transformer Models via SMPC

📅 2024-01-01
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 9
✨ Influential: 1
🤖 AI Summary
To address the risk of leaking sensitive data and model parameters during Transformer inference in cloud environments, this paper proposes SecFormer, an efficient SMPC-based privacy-preserving inference framework. The method eliminates the high-overhead exponential and maximum operations in Softmax and, using suitable numerical approximations, develops efficient SMPC protocols for the remaining complex nonlinear operators, including GeLU, LayerNorm, and a redesigned Softmax. Without sacrificing model performance, SecFormer improves classification accuracy over MPCFormer by 3.4% for BERT-Base and 24.7% for BERT-Large, and achieves 3.57× and 3.58× speedups in inference latency compared to PUMA. These gains alleviate the efficiency bottleneck that nonlinear operations impose on Transformers under SMPC, advancing practical secure inference for large language models.

๐Ÿ“ Abstract
With the growing use of Transformer models hosted on cloud platforms to offer inference services, privacy concerns are escalating, especially concerning sensitive data like investment plans and bank account details. Secure Multi-Party Computing (SMPC) emerges as a promising solution to protect the privacy of inference data and model parameters. However, the application of SMPC in Privacy-Preserving Inference (PPI) for Transformer models often leads to considerable slowdowns or declines in performance. This is largely due to the multitude of nonlinear operations in the Transformer architecture, which are not well-suited to SMPC and difficult to circumvent or optimize effectively. To address this concern, we introduce a comprehensive PPI framework called SecFormer to achieve fast and accurate PPI for Transformer models. We successfully eliminate the high-cost exponential and maximum operations in PPI without sacrificing model performance and develop a suite of efficient SMPC protocols by employing suitable numerical computation methods to boost other complex nonlinear functions in PPI, including GeLU, LayerNorm, and a redesigned Softmax. Our extensive experiments reveal that SecFormer outperforms MPCFormer in performance, showing improvements of 3.4% and 24.7% for BERT-Base and BERT-Large, respectively. In terms of efficiency, SecFormer is 3.57 and 3.58 times faster than PUMA for BERT-Base and BERT-Large, demonstrating its effectiveness and speed.
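The abstract's central efficiency idea, removing the exp and max operations from softmax so that only SMPC-cheap additions, multiplications, and divisions remain, can be illustrated with a plain-NumPy sketch. The quadratic surrogate below follows the well-known "2Quad"-style polynomial replacement popularized by MPCFormer; it is not SecFormer's actual redesigned protocol, and the offset `c` is a hypothetical parameter chosen for illustration:

```python
import numpy as np

def softmax(x):
    # Standard softmax: the row-wise max (for numerical stability)
    # and the exponential are both expensive under secret sharing.
    z = x - np.max(x, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

def softmax_2quad(x, c=5.0):
    # Polynomial surrogate: (x + c)^2, normalized by its row sum.
    # Squaring and division map to cheap SMPC primitives; no max or
    # exp is evaluated. c is an illustrative offset that keeps the
    # shifted scores positive on the typical input range.
    q = (x + c) ** 2
    return q / np.sum(q, axis=-1, keepdims=True)
```

Like the exact softmax, the surrogate produces a nonnegative row that sums to one and preserves the ordering of scores (for inputs above `-c`); models are typically fine-tuned or distilled to absorb the approximation error.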
Problem

Research questions and friction points this paper is trying to address.

Addresses privacy risks in cloud-based Transformer model inference
Reduces performance slowdowns in SMPC for Transformer PPI
Optimizes nonlinear operations in Transformers for efficient SMPC
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses SMPC for privacy-preserving Transformer inference
Eliminates costly exponential and maximum operations
Develops efficient SMPC protocols for GeLU, LayerNorm, and a redesigned Softmax
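The GeLU item above hinges on the same principle: the exact activation involves erf or tanh, which are costly under SMPC, so it is replaced by a function built only from additions and multiplications. A minimal sketch of this idea, fitting a low-degree polynomial to GeLU on a bounded interval, is shown below; the degree (6) and interval ([-4, 4]) are illustrative assumptions, not the protocol from the paper:

```python
import numpy as np

def gelu_ref(x):
    # Reference GeLU (tanh form); tanh is expensive under secret sharing.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

# Least-squares polynomial fit on a bounded interval. Under SMPC,
# evaluating the resulting polynomial needs only additions and
# multiplications on secret shares.
xs = np.linspace(-4.0, 4.0, 2001)
coeffs = np.polyfit(xs, gelu_ref(xs), deg=6)

def gelu_poly(x):
    # Horner-style evaluation of the fitted polynomial.
    return np.polyval(coeffs, x)
```

In practice such approximations are paired with fine-tuning or distillation so the model compensates for the residual approximation error inside the fitted interval.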