EFIM: Efficient Serving of LLMs for Infilling Tasks with Improved KV Cache Reuse

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
In large language model (LLM) fill-in-the-middle (FIM) tasks, standard prompt structures cause frequent invalidation of prefix/suffix key-value (KV) caches and severely limit cross-request cache reuse. Method: We propose EFIM—a cache-efficient FIM prompt format that decouples dependencies among prefix, suffix, and insertion point—and fragment tokenization, a training paradigm that explicitly models subword boundaries to mitigate cache invalidation from inconsistent tokenization. Contribution/Results: EFIM and fragment tokenization jointly optimize KV cache reuse efficiency, achieving an average 52% latency reduction and 98% throughput improvement on two mainstream LLMs, while strictly preserving original completion quality. To our knowledge, this is the first work to jointly model prompt structure design, tokenization mechanism, and KV cache optimization—establishing a novel paradigm for accelerating LLM serving inference.

📝 Abstract
Large language models (LLMs) are often used for infilling tasks, which involve predicting or generating missing information in a given text. These tasks typically require multiple interactions with similar context. To reduce the computation of repeated historical tokens, cross-request key-value (KV) cache reuse, a technique that stores and reuses intermediate computations, has become a crucial method in multi-round interactive services. However, in infilling tasks, the KV cache reuse is often hindered by the structure of the prompt format, which typically consists of a prefix and suffix relative to the insertion point. Specifically, the KV cache of the prefix or suffix part is frequently invalidated as the other part (suffix or prefix) is incrementally generated. To address the issue, we propose EFIM, a transformed prompt format of FIM to unleash the performance potential of KV cache reuse. Although the transformed prompt can solve the inefficiency, it exposes subtoken generation problems in current LLMs, where they have difficulty generating partial words accurately. Therefore, we introduce a fragment tokenization training method which splits text into multiple fragments before tokenization during data processing. Experiments on two representative LLMs show that LLM serving with EFIM can lower the latency by 52% and improve the throughput by 98% while maintaining the original infilling capability. EFIM's source code is publicly available at https://github.com/gty111/EFIM.
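The cache-invalidation problem the abstract describes can be illustrated with a toy prefix-cache model. This is a sketch, not the paper's implementation: cross-request KV cache reuse is modeled as the longest common token prefix between requests, and the suffix-first layout at the end is only in the spirit of EFIM's decoupling, not its exact prompt format.

```python
# Toy model: a prefix cache can only be reused for the longest common
# prefix between a new prompt's token list and a cached one.

def reusable_tokens(cached, new):
    """Length of the longest common prefix between two token lists."""
    n = 0
    for a, b in zip(cached, new):
        if a != b:
            break
        n += 1
    return n

def fim_prompt(prefix, suffix):
    # Standard FIM layout: <PRE> prefix <SUF> suffix <MID>
    return ["<PRE>", *prefix, "<SUF>", *suffix, "<MID>"]

# Request 1: cursor after token "a". Request 2: the generated token "b"
# has been accepted, so the prefix grows by one token.
r1 = fim_prompt(prefix=["a"], suffix=["x", "y", "z"])
r2 = fim_prompt(prefix=["a", "b"], suffix=["x", "y", "z"])

# Only "<PRE> a" matches; the unchanged suffix tokens shift position,
# so their cached KV entries are invalidated anyway.
print(reusable_tokens(r1, r2))  # → 2

# A suffix-first layout (illustrative, not EFIM's exact format) keeps
# the unchanged suffix at the front, where its cache stays valid:
def suffix_first_prompt(prefix, suffix):
    return ["<SUF>", *suffix, "<PRE>", *prefix, "<MID>"]

s1 = suffix_first_prompt(["a"], ["x", "y", "z"])
s2 = suffix_first_prompt(["a", "b"], ["x", "y", "z"])
print(reusable_tokens(s1, s2))  # → 6
```

The same asymmetry applies in reverse: whichever part sits later in the prompt loses its cache whenever the earlier part changes, which is why decoupling the two parts matters.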
Problem

Research questions and friction points this paper is trying to address.

Improving KV cache reuse for LLM infilling tasks
Addressing subtoken generation issues in transformed prompts
Enhancing latency and throughput in multi-round interactive services
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformed prompt format for KV cache reuse
Fragment tokenization training for partial words
Improved latency and throughput in LLM serving
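The fragment tokenization idea can be sketched with a toy greedy subword tokenizer. The vocabulary, cut positions, and tokenizer below are illustrative stand-ins, not the paper's training recipe; the point is that cutting text before tokenization exposes mid-word token boundaries that whole-text tokenization never produces, which is what the model must learn to handle for subtoken generation.

```python
# Toy vocabulary for a greedy longest-match tokenizer (stand-in for BPE).
VOCAB = {"hello", "hel", "lo", "wor", "world",
         "h", "e", "l", "o", "w", "r", "d"}

def greedy_tokenize(text):
    """Greedy longest-match subword tokenization over VOCAB."""
    toks, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                toks.append(text[i:j])
                i = j
                break
        else:
            toks.append(text[i])  # unknown char falls back to itself
            i += 1
    return toks

def fragment_tokenize(text, cuts):
    """Cut text at the given positions, then tokenize each fragment
    independently; cuts may fall mid-word, so partial-word token
    sequences appear in the training data."""
    toks, prev = [], 0
    for c in [*cuts, len(text)]:
        toks.extend(greedy_tokenize(text[prev:c]))
        prev = c
    return toks

print(greedy_tokenize("helloworld"))            # → ['hello', 'world']
print(fragment_tokenize("helloworld", [4, 7]))  # cuts mid-word:
# → ['hel', 'l', 'o', 'w', 'o', 'r', 'l', 'd']
```

Training on such fragment-boundary tokenizations is what lets the model continue from an arbitrary character position, which EFIM's transformed prompt requires.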
👥 Authors

- Tianyu Guo, Sun Yat-sen University, Guangzhou, China
- Hande Dong, Tencent (machine learning, data mining, NLP)
- Yichong Leng, University of Science and Technology of China (speech processing, NLP)
- Feng Liu, Tencent, Shenzhen, China
- Cheater Lin, Tencent, Shenzhen, China
- Nong Xiao, Sun Yat-sen University, Guangzhou, China
- Xianwei Zhang, Sun Yat-sen University; AMD Research/RTG (architecture/systems, compilation, GPU/memory, HPC, simulation/modeling)