Encrypted Large Model Inference: The Equivariant Encryption Paradigm

📅 2025-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multi-user settings, deploying large models (e.g., LLMs, diffusion models) on untrusted platforms faces a fundamental privacy–efficiency trade-off. Method: This paper introduces Equivariant Encryption (EE), a novel paradigm grounded in the theory of function equivariance under group actions. EE selectively obfuscates critical layer representations while enabling exact ciphertext computation for linear and a prescribed set of nonlinear operations, bypassing the prohibitive overhead of fully homomorphic encryption (FHE). EE is architecture-agnostic, supporting CNNs, Transformers, and other architectures, and integrates seamlessly into standard inference pipelines. Contribution/Results: Experiments demonstrate end-to-end privacy preservation in decentralized scenarios: inputs, intermediate activations, and outputs remain confidential throughout inference. Accuracy is preserved losslessly; throughput approaches plaintext-level performance, with latency overhead below 0.5%. EE significantly outperforms secure multi-party computation (SMPC) and FHE in both efficiency and practicality.
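The equivariance property underlying EE can be illustrated with permutations, one natural choice of group action: an elementwise nonlinearity such as ReLU commutes with any permutation of the activation vector, so an obfuscated (permuted) representation passes through the nonlinearity without ever being decrypted. A minimal sketch in NumPy (the permutation-based instantiation is an illustrative assumption here, not necessarily the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(8)           # plaintext activation vector
perm = rng.permutation(8)            # secret group element: a permutation
relu = lambda v: np.maximum(v, 0.0)  # a "prescribed" elementwise nonlinearity

# Equivariance: applying the group action before or after the
# nonlinearity gives the same result, relu(g·x) == g·relu(x).
lhs = relu(x[perm])
rhs = relu(x)[perm]
assert np.allclose(lhs, rhs)
```

Because the two sides agree exactly, no approximation error is introduced, which is consistent with the lossless-accuracy claim above.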

📝 Abstract
Large-scale deep learning models, such as modern language models and diffusion architectures, have revolutionized applications ranging from natural language processing to computer vision. However, their deployment in distributed or decentralized environments raises significant privacy concerns, as sensitive data may be exposed during inference. Traditional techniques like secure multi-party computation, homomorphic encryption, and differential privacy offer partial remedies but often incur substantial computational overhead, latency penalties, or limited compatibility with non-linear network operations. In this work, we introduce Equivariant Encryption (EE), a novel paradigm designed to enable secure, "blind" inference on encrypted data with near-zero performance overhead. Unlike fully homomorphic approaches that encrypt the entire computational graph, EE selectively obfuscates critical internal representations within neural network layers while preserving the exact functionality of both linear and a prescribed set of non-linear operations. This targeted encryption ensures that raw inputs, intermediate activations, and outputs remain confidential, even when processed on untrusted infrastructure. We detail the theoretical foundations of EE, compare its performance and integration complexity against conventional privacy-preserving techniques, and demonstrate its applicability across a range of architectures, from convolutional networks to large language models. Furthermore, our work provides a comprehensive threat analysis, outlining potential attack vectors and baseline strategies, and benchmarks EE against standard inference pipelines in decentralized settings. The results confirm that EE maintains high fidelity and throughput, effectively bridging the gap between robust data confidentiality and the stringent efficiency requirements of modern, large-scale model inference.
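As a concrete sketch of the "blind" inference idea, consider a single linear layer followed by ReLU, obfuscated with secret input and output permutations. The untrusted server only ever sees permuted weights and permuted activations, yet the client recovers the exact plaintext output by inverting its secret permutation. Permutations are one convenient group action that commutes with elementwise nonlinearities; the paper's actual transformations may differ, so treat this as an assumption-laden illustration rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
d_in, d_out = 6, 4

# Plaintext model and input (what must stay hidden from the server).
W = rng.standard_normal((d_out, d_in))
x = rng.standard_normal(d_in)

# Secret group elements held by the client: input/output permutations.
p_in = rng.permutation(d_in)
p_out = rng.permutation(d_out)

# Encrypted weights shipped to the untrusted server:
# W_enc[i, j] = W[p_out[i], p_in[j]], i.e. rows and columns permuted.
W_enc = W[np.ix_(p_out, p_in)]

# Client encrypts the input; server computes blindly; client decrypts.
x_enc = x[p_in]                          # encrypt input
y_enc = np.maximum(W_enc @ x_enc, 0.0)   # server: linear + ReLU on ciphertext
y = np.empty(d_out)
y[p_out] = y_enc                         # decrypt: invert the output permutation

# The decrypted result matches plaintext inference exactly (lossless).
assert np.allclose(y, np.maximum(W @ x, 0.0))
```

The server-side work is a single matrix–vector product plus an elementwise ReLU, identical in cost to plaintext inference, which is why this style of obfuscation can approach plaintext throughput where FHE cannot.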
Problem

Research questions and friction points this paper is trying to address.

Privacy Protection
Deep Learning Models
Data Security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equivariant Encryption
Large-scale Model Efficiency
Privacy-preserving Inference