Compiling Code LLMs into Lightweight Executables

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deploying large code language models on resource-constrained local devices remains challenging due to hardware limitations, forcing reliance on cloud services and compromising privacy, inference latency, and offline availability. This work proposes Ditto, a method that co-optimizes model compression and inference program generation to compile large code models into lightweight executables, with inference programs written in statically typed languages such as C. Ditto integrates bounded-error product quantization with quantization-aware inference program synthesis, and extends the LLVM compiler to automatically replace general matrix-vector multiplication (GEMV) operations with efficient BLAS calls. Evaluated on three prominent code language models, Ditto achieves up to 10.5× faster inference and a 6.4× reduction in memory footprint, with an average drop of only 0.27% in pass@1 accuracy.
📝 Abstract
The demand for better prediction accuracy and higher execution performance in neural networks continues to grow. The emergence and success of Large Language Models (LLMs) have led to the development of many cloud-based tools for software engineering tasks such as code suggestion. While effective, cloud deployment raises concerns over privacy, latency, and reliance on connectivity. Running LLMs locally on personal devices such as laptops would address these issues by enabling offline use and reducing response time. However, local deployment is challenging: commodity devices lack high-performance accelerators like GPUs and are constrained by limited memory and compute capacity, making it difficult to execute large models efficiently. We present Ditto, a novel method for optimizing both the model size of Code LLMs and their inference programs, particularly for statically-typed programming languages such as C. Our approach integrates two key components: (1) a model compression technique inspired by product quantization, which clusters model parameters into codebooks and quantizes them to lower bit widths while ensuring that outputs remain within a bounded error, as well as synthesizing the inference program for the quantized model; and (2) a compilation pass integrated into LLVM that automatically detects and replaces unoptimized General Matrix-Vector Multiplication (GEMV) operations with implementations from Basic Linear Algebra Subprograms (BLAS) libraries, which are highly optimized for runtime performance. The output of Ditto is an optimized and compiled executable for running selected Code LLMs. We evaluate Ditto on three popular Code LLMs, achieving up to 10.5× faster inference and 6.4× lower memory usage compared with their original inference pipeline, while maintaining accuracy close to that of the full-precision models (with an average loss of only 0.27% in pass@1).
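The codebook-based compression described in the abstract can be illustrated with a minimal sketch. This is not Ditto's actual algorithm: it assumes, for simplicity, scalar weights clustered with 1-D k-means into a single codebook, and it only reports the worst-case reconstruction error rather than implementing the paper's bound-enforcement scheme.

```python
# Illustrative sketch only: cluster weights into a small codebook with 1-D
# k-means, store each weight as a low-bit index into the codebook, and
# measure the worst-case reconstruction error.
import random

def kmeans_1d(values, k, iters=25):
    """Plain 1-D k-means; returns k centroids (the codebook)."""
    centroids = sorted(random.sample(values, k))
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centroids[j]))
            buckets[nearest].append(v)
        # Keep the old centroid if a bucket ends up empty.
        centroids = [sum(b) / len(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    return centroids

def quantize(weights, bits):
    """Quantize weights to `bits`-bit codebook indices.

    Returns (codebook, codes, max_err); a bounded-error scheme would grow
    or refine the codebook until max_err falls below its target.
    """
    k = 2 ** bits  # codebook size representable in `bits` bits
    codebook = kmeans_1d(weights, k)
    codes = [min(range(k), key=lambda j: abs(w - codebook[j]))
             for w in weights]
    max_err = max(abs(w - codebook[c]) for w, c in zip(weights, codes))
    return codebook, codes, max_err

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(256)]
codebook, codes, max_err = quantize(weights, bits=3)
print(f"codebook size={len(codebook)}, max reconstruction error={max_err:.3f}")
```

Replacing 32-bit floats with 3-bit indices plus a small shared codebook is the kind of storage reduction behind the paper's 6.4× end-to-end memory figure, though Ditto's actual codebook layout and error-bound mechanism are not reproduced here.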
Problem

Research questions and friction points this paper is trying to address.

Code LLMs
local deployment
resource-constrained devices
model efficiency
privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Code LLMs
model compression
product quantization
LLVM compilation
GEMV optimization