Network and Compiler Optimizations for Efficient Linear Algebra Kernels in Private Transformer Inference

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Public cloud deployment of LLMs raises serious data-privacy risks because queries are uploaded in plaintext. Method: the paper proposes an optimization framework for private Transformer inference under fully homomorphic encryption (FHE), targeting the high computational overhead of linear algebra kernels in the CKKS scheme. It replaces conventional packed-row encoding with Baby-Step Giant-Step (BSGS) encoding for Transformer linear transformations and adds network-level structural pruning. The authors extend the Orion compiler to support ciphertext–ciphertext matrix multiplication, a key component of self-attention, and use roofline modeling to show that FHE linear kernels are memory-bound, motivating a rethink of encoding strategies within CKKS. Contributions/Results: BSGS encoding outperforms packed-row encoding by up to 13.7×; pruning accelerates FHE inference of feed-forward layers by up to 11.46×; Orion gains ciphertext–ciphertext matrix-multiplication support; and FHE primitives exhibit an arithmetic intensity of only about 0.1 integer operations per byte of DRAM traffic, confirming memory-bound behavior.
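The BSGS encoding summarized above reduces the rotation count of a diagonal-method matrix–vector product from n to roughly n1 + n2 (with n = n1·n2). A minimal plaintext sketch in NumPy, simulating ciphertext slot rotations with `np.roll` (the function names here are illustrative, not Orion's API, and no actual CKKS arithmetic is performed):

```python
import numpy as np

def diag(M, k):
    """k-th generalized diagonal: diag(M, k)[l] = M[l, (l+k) % n]."""
    n = M.shape[0]
    return np.array([M[l, (l + k) % n] for l in range(n)])

def rot(v, k):
    """Simulated homomorphic left rotation by k slots."""
    return np.roll(v, -k)

def bsgs_matvec(M, v, n1, n2):
    """Baby-step giant-step diagonal mat-vec for an n x n matrix, n = n1*n2.

    Uses n1 'baby' rotations of v plus n2 'giant' rotations of partial sums,
    instead of the n rotations required by the plain diagonal method.
    """
    n = M.shape[0]
    assert n1 * n2 == n
    baby = [rot(v, i) for i in range(n1)]          # n1 baby-step rotations
    out = np.zeros(n)
    for j in range(n2):                            # n2 giant-step rotations
        inner = np.zeros(n)
        for i in range(n1):
            # Pre-rotate the diagonal by -j*n1 so one giant rotation of the
            # accumulated inner sum aligns every term correctly.
            inner += rot(diag(M, j * n1 + i), -j * n1) * baby[i]
        out += rot(inner, j * n1)
    return out
```

In real CKKS the rotations dominate cost (each requires a key switch), which is why trading n rotations for n1 + n2 yields the large speedups reported.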

📝 Abstract
Large language model (LLM) based services are primarily structured as client-server interactions, with clients sending queries directly to cloud providers that host LLMs. This approach currently compromises data privacy as all queries must be processed in the cloud and in the clear. Fully Homomorphic Encryption (FHE) is a solution to this data privacy issue by enabling computations directly upon encrypted queries. However, running encrypted transformer inference is challenging as programmers must map standard kernels to the constrained instruction set provided by FHE. In this work, we explore implementations of linear algebra kernels needed for transformer inference in FHE and understand how network optimization can help mitigate FHE costs while remaining performant. We leverage the Orion PyTorch-to-FHE framework to benchmark several linear algebra kernels in order to profile two linear transformation methods, packed row and BSGS, and find that BSGS outperforms packed row methods by up to $13.7\times$ at transformer-level scales. We also incorporate network-level pruning strategies that reduce FHE runtimes of feed-forward layers by up to $11.46\times$. Furthermore, we extend Orion to include ciphertext-ciphertext matrix-matrix products, a key component in the self-attention blocks. Finally, we perform a roofline analysis of FHE primitives and encrypted linear transformations and find that (SIMD encoded) implementations are memory-bound with primitives having roughly $0.1$ integer operations per byte of DRAM traffic. These findings illustrate the need for exploring alternative encoding schemes and models of computation within CKKS to unlock scalable private transformer inference. We conduct all experiments using the Orion framework which can be found at: https://github.com/baahl-nyu/orion.
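The abstract's roofline finding can be made concrete: under the roofline model, attainable throughput is the minimum of compute peak and arithmetic intensity times memory bandwidth. A short sketch using hypothetical hardware numbers (the 1 TOP/s peak and 100 GB/s bandwidth below are assumptions for illustration; only the 0.1 op/byte intensity comes from the paper):

```python
def roofline_attainable(intensity_ops_per_byte, peak_ops_per_s, mem_bw_bytes_per_s):
    """Attainable throughput under the roofline model:
    min(compute roof, intensity * memory bandwidth)."""
    return min(peak_ops_per_s, intensity_ops_per_byte * mem_bw_bytes_per_s)

# Hypothetical machine: 1 TOP/s integer peak, 100 GB/s DRAM bandwidth.
peak, bw = 1e12, 100e9
ridge = peak / bw                                  # 10 op/byte ridge point

# At ~0.1 int-op/byte, FHE primitives sit two orders of magnitude left of
# the ridge, so throughput is capped by bandwidth, not compute:
attainable = roofline_attainable(0.1, peak, bw)    # bandwidth-limited
utilization = attainable / peak                    # fraction of peak reachable
```

On this assumed machine the kernels could reach only 1% of compute peak, which is why the abstract argues for alternative encodings rather than faster arithmetic.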
Problem

Research questions and friction points this paper is trying to address.

Optimizes linear algebra kernels for efficient private transformer inference using FHE.
Explores network optimization to reduce FHE costs while maintaining performance.
Addresses challenges in mapping transformer kernels to constrained FHE instruction sets.
Innovation

Methods, ideas, or system contributions that make the work stand out.

BSGS encoding outperforms packed-row encoding by up to 13.7×.
Network-level pruning reduces feed-forward FHE runtime by up to 11.46×.
Extends Orion with ciphertext-ciphertext matrix-matrix products.
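The pruning result above relies on structured (whole-row/column) sparsity: shrinking a weight matrix directly removes ciphertext diagonals and rotations, whereas unstructured zeros save nothing under SIMD packing. An illustrative magnitude-based structured-pruning sketch (the paper's actual pruning criterion may differ; this is one common heuristic):

```python
import numpy as np

def structured_prune(W, keep_frac):
    """Keep only the output rows of W with the largest L2 norms.

    Dropping entire rows shrinks the matrix itself, so an FHE mat-vec
    needs fewer diagonals and rotations; scattered zeros would not help.
    """
    n_keep = max(1, int(round(W.shape[0] * keep_frac)))
    norms = np.linalg.norm(W, axis=1)              # per-row magnitude
    keep = np.sort(np.argsort(norms)[-n_keep:])    # indices of rows kept
    return W[keep], keep
```

The returned index set lets the surrounding layers be re-wired to the reduced dimension, which is where the FHE runtime reduction comes from.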