🤖 AI Summary
This work addresses the challenges of long-sequence homomorphic Transformer inference, which is hindered by ciphertext expansion causing single-GPU memory exhaustion and high multi-GPU communication overhead. The paper presents the first application-cryptography co-designed parallel inference framework that jointly analyzes Transformer dataflow and CKKS polynomial structure to devise a hybrid parallelization strategy. By integrating modulus- and token-consistent data placement, ciphertext-dependency-aware device assignment, operator reordering, and communication-computation overlap, the approach substantially reduces communication volume. On 2048-token inputs, communication in the feed-forward network and self-attention modules is reduced by 57.9% and 81.3%, respectively. The system achieves 96.62% scaling efficiency across four GPUs, delivers a 3.86× end-to-end speedup, and cuts per-GPU memory consumption by 69.1%.
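The placement idea in the summary — keep each token block's ciphertext data on one device so element-wise homomorphic ops stay local, paying communication only at application-level aggregation points — can be sketched as follows. This is an illustrative sketch, not the AEGIS implementation; the function name and block layout are assumptions.

```python
# Illustrative sketch (hypothetical helper, not the AEGIS API):
# token-coherent placement assigns contiguous token blocks to devices,
# so element-wise homomorphic ops (e.g. feed-forward activations) run
# entirely on-device, and only cross-token steps such as attention
# force an inter-GPU collective.

def token_coherent_placement(num_tokens, num_devices):
    """Map each device to a contiguous block of token indices."""
    block = (num_tokens + num_devices - 1) // num_devices  # ceil division
    return {
        d: list(range(d * block, min((d + 1) * block, num_tokens)))
        for d in range(num_devices)
    }

# For the paper's setting of 2048 tokens on 4 GPUs, each device holds
# a contiguous 512-token block.
placement = token_coherent_placement(2048, 4)
```

Because ciphertext expansion makes each encrypted token block large, keeping blocks contiguous per device is also what drives the per-GPU memory reduction reported above.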
📝 Abstract
Fully Homomorphic Encryption (FHE) enables privacy-preserving Transformer inference, but long-sequence encrypted Transformers quickly exceed single-GPU memory capacity because encoded weights are already large and encrypted activations grow rapidly with sequence length. Multi-GPU execution therefore becomes unavoidable, yet scaling remains challenging because communication is jointly induced by application-level aggregation and encryption-level RNS coupling. Existing approaches either synchronize between devices frequently or replicate encrypted tensors across devices, leading to excessive communication and latency.
We present AEGIS, an Application-Encryption Guided Inference System for scalable long-sequence encrypted Transformer inference on multi-GPU platforms. AEGIS derives device placement from ciphertext dependencies jointly induced by Transformer dataflow and CKKS polynomial coupling, co-locating modulus-coherent and token-coherent data so that communication is introduced only when application dependencies require it. It further reorders polynomial operators to overlap the remaining collectives with computation.
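The overlap step described above — reordering operators so that a collective runs concurrently with polynomial work that does not depend on it — can be sketched with a simple thread-based pattern. This is a minimal sketch under assumed names (`local_compute`, `collective` are hypothetical stand-ins for independent limb operations and an inter-GPU collective), not the AEGIS scheduler.

```python
import threading

def overlapped_step(local_compute, collective):
    """Launch the collective on a side thread while independent
    polynomial operations proceed, then join before the first op
    that consumes the communicated data."""
    result = {}
    comm_thread = threading.Thread(
        target=lambda: result.setdefault("comm", collective())
    )
    comm_thread.start()          # communication begins immediately
    local = local_compute()      # independent work hides its latency
    comm_thread.join()           # synchronize only at the dependency
    return local + result["comm"]
```

On GPUs the same pattern would use separate CUDA streams rather than host threads, but the scheduling constraint is identical: the join point moves to the first ciphertext operation that actually depends on the collective's output.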
On 2048-token inputs, AEGIS reduces inter-GPU communication by up to 57.9% in feed-forward networks and 81.3% in self-attention versus prior state-of-the-art designs. On four GPUs, it achieves up to 96.62% scaling efficiency, a 3.86× end-to-end speedup, and a 69.1% per-device memory reduction. These results establish coordinated application-encryption parallelism as a practical foundation for scalable homomorphic Transformer inference.