🤖 AI Summary
This work addresses the inefficiency of fully homomorphic encryption (FHE) computations and the difficulty of manually deriving optimal program transformations, which typically require expert knowledge. To overcome these challenges, the paper applies deep reinforcement learning for the first time to automate FHE program optimization. A trained agent learns sequences of rewrite rules that efficiently transform scalar operations into vectorized equivalents, reducing both instruction latency and ciphertext noise growth while supporting both structured and unstructured code. Diverse training data are synthesized using a large language model, and the approach is integrated into an FHE compiler. The resulting optimized code achieves a 5.3× speedup in execution time, a 2.54× reduction in noise accumulation, and a 27.9× improvement in compilation speed (geometric means) compared to the prior state of the art.
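The core transformation the agent learns is scalar-to-vector rewriting: FHE schemes such as BFV/CKKS batch many plaintext values into the slots of a single ciphertext, so several independent scalar operations can be replaced by one SIMD operation. A minimal sketch of one such rewrite rule is below; the IR node names (`Add`, `VecAdd`, `pack_adds`) are illustrative, not the paper's actual compiler IR.

```python
# Hypothetical sketch: a rewrite rule that packs independent scalar
# additions into a single slot-wise (SIMD) addition, the kind of
# vectorization FHE compilers perform. Names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Add:
    """Scalar addition of two named operands."""
    lhs: str
    rhs: str

@dataclass(frozen=True)
class VecAdd:
    """One slot-wise addition over two packed operand vectors."""
    lhs: tuple
    rhs: tuple

def pack_adds(ops):
    """Rewrite a list of independent scalar Adds into one VecAdd
    by packing their operands into corresponding ciphertext slots."""
    return VecAdd(tuple(op.lhs for op in ops), tuple(op.rhs for op in ops))

scalar = [Add("a0", "b0"), Add("a1", "b1"), Add("a2", "b2")]
vectorized = pack_adds(scalar)
# Three homomorphic additions collapse into one, amortizing both
# latency and per-operation noise growth across the packed slots.
```

The optimization problem is choosing which of many such rules to fire, and in what order, which is the sequential decision problem the RL agent solves.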
📝 Abstract
Fully Homomorphic Encryption (FHE) enables computations directly on encrypted data, but its high computational cost remains a significant barrier. Writing efficient FHE code is a complex task requiring cryptographic expertise, and finding the optimal sequence of program transformations is often intractable. In this paper, we propose CHEHAB RL, a novel framework that leverages deep reinforcement learning (RL) to automate FHE code optimization. Instead of relying on predefined heuristics or combinatorial search, our method trains an RL agent to learn an effective policy for applying a sequence of rewriting rules to automatically vectorize scalar FHE code while reducing instruction latency and noise growth. The proposed approach supports the optimization of both structured and unstructured code. To train the agent, we synthesize a diverse dataset of computations using a large language model (LLM). We integrate our proposed approach into the CHEHAB FHE compiler and evaluate it on a suite of benchmarks, comparing its performance against Coyote, a state-of-the-art vectorizing FHE compiler. The results show that our approach generates code that is $5.3\times$ faster in execution and accumulates $2.54\times$ less noise, while the compilation process itself is $27.9\times$ faster than Coyote (geometric means).
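The RL formulation described above can be sketched in miniature: states are program versions, actions are rewrite rules, and the reward is the estimated cost reduction (latency plus noise) each application yields. The toy below uses a state-free Q-learning update and an invented cost model with made-up rule names and savings; the paper's agent is a deep network over real program representations, not this bandit-style simplification.

```python
# Hypothetical toy of the RL loop: the agent picks rewrite rules and is
# rewarded by the cost reduction they achieve. The rules and their
# savings values are invented for illustration only.
import random

RULES = ["fuse", "pack", "noop"]
SAVINGS = {"fuse": 2, "pack": 5, "noop": 0}  # assumed cost model

def step(cost, rule):
    """Apply a rewrite rule; reward is the achieved cost reduction."""
    new_cost = max(cost - SAVINGS[rule], 1)
    return new_cost, cost - new_cost

def train(episodes=500, horizon=4, eps=0.2, alpha=0.5):
    """Epsilon-greedy Q-learning with a single shared state (a toy)."""
    q = {r: 0.0 for r in RULES}
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(episodes):
        cost = 20  # initial program cost in arbitrary units
        for _ in range(horizon):
            if rng.random() < eps:
                rule = rng.choice(RULES)          # explore
            else:
                rule = max(q, key=q.get)          # exploit
            cost, reward = step(cost, rule)
            q[rule] += alpha * (reward - q[rule])  # running estimate
    return q

q = train()
best_rule = max(q, key=q.get)  # the agent learns to prefer "pack"
```

In the full system the reward would come from the compiler's cost and noise estimates rather than a hand-written table, and the policy would generalize across programs instead of being a per-rule average.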