ProfInfer: An eBPF-based Fine-Grained LLM Inference Profiler

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of fine-grained runtime observability in existing large language model (LLM) inference engines, which hinders precise identification of performance bottlenecks. To overcome this limitation, the authors introduce eBPF technology into LLM inference analysis for the first time, proposing a non-intrusive profiling framework that dynamically attaches function probes to capture multidimensional data—including operator execution, computation graph structure, timelines, and hardware performance counters—without requiring source code modifications. The approach enables high-fidelity observation of complex behaviors such as Mixture-of-Experts (MoE) routing and operator offloading, with a runtime overhead of less than 4%. This framework provides an efficient and practical diagnostic capability to support inference optimization, scheduling strategies, and resource-aware deployment.

📝 Abstract
As large language models (LLMs) move from research to production, understanding how inference engines behave in real time has become both essential and elusive. Unlike general-purpose engines such as ONNX Runtime, today's LLM inference systems offer little operator-level visibility, leaving developers blind to where time and resources go. Even basic questions -- is this workload memory-bound or compute-bound? -- often remain unanswered. To close this gap, we develop a fine-grained, non-intrusive profiling framework for modern LLM inference engines, exemplified by llama-cpp but applicable to similar runtime architectures. Built on extended Berkeley Packet Filter (eBPF) technology, our system dynamically attaches probes to runtime functions across multiple layers -- without modifying or recompiling the source. It transforms collected traces into rich visualizations of operators, graphs, timelines, and hardware counter trends, exposing how dense inference, Mixture-of-Experts routing, and operator offloading behave in practice. With less than 4% runtime overhead and high profiling fidelity, our framework makes LLM inference both transparent and diagnosable, turning performance profiling into a practical tool for optimization, scheduling, and resource-aware deployment.
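The abstract describes attaching uprobes to runtime functions and transforming the resulting traces into operator timelines. As a minimal sketch of the trace post-processing side only (not the paper's implementation), the snippet below aggregates hypothetical `(timestamp_ns, "enter"/"exit", function)` uprobe events into per-operator latency statistics; the event format and the ggml-style function names are illustrative assumptions.

```python
from collections import defaultdict

def summarize(events):
    """Aggregate (ts_ns, kind, fn) uprobe events into per-function latency stats.

    `events` is a time-ordered list of tuples; "enter"/"exit" pairs are
    matched per function with a stack, so nested calls pair up correctly.
    """
    open_calls = defaultdict(list)          # fn -> stack of enter timestamps
    stats = defaultdict(lambda: [0, 0])     # fn -> [call count, total ns]
    for ts, kind, fn in events:
        if kind == "enter":
            open_calls[fn].append(ts)
        elif open_calls[fn]:                # ignore exits with no matching enter
            start = open_calls[fn].pop()
            stats[fn][0] += 1
            stats[fn][1] += ts - start
    return {fn: {"calls": n, "total_ns": t, "avg_ns": t / n}
            for fn, (n, t) in stats.items()}

# Hypothetical trace: two matmul operator calls, one nested normalization.
trace = [
    (100, "enter", "ggml_compute_forward_mul_mat"),
    (400, "exit",  "ggml_compute_forward_mul_mat"),
    (500, "enter", "ggml_compute_forward_mul_mat"),
    (550, "enter", "ggml_compute_forward_rms_norm"),
    (600, "exit",  "ggml_compute_forward_rms_norm"),
    (900, "exit",  "ggml_compute_forward_mul_mat"),
]
print(summarize(trace))
```

In a real deployment the events would come from kernel-side eBPF probes via a ring buffer rather than a Python list; this sketch only illustrates the enter/exit pairing that turns raw timestamps into per-operator durations.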
Problem

Research questions and friction points this paper is trying to address.

LLM inference, performance profiling, operator-level visibility, runtime observability, resource utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

eBPF, LLM inference profiling, non-intrusive tracing, fine-grained observability, Mixture-of-Experts
Bohua Zou
Huawei Hilbert Research Center (Dresden), Dresden, Germany; Technical University of Munich, Munich, Germany
Debayan Roy
Principal OS Kernel Researcher, Huawei
Cyber-Physical Systems, Embedded and Real-Time Systems, Control-Platform Co-Design, Automotive
Dhimankumar Yogesh Airao
Huawei Hilbert Research Center (Dresden), Dresden, Germany
Weihao Xu
Technical University of Munich, Munich, Germany
Binqi Sun
Technical University of Munich
Scheduling, Optimization, Real-Time Systems, Cyber-Physical Systems
Yutao Liu
Huawei Hilbert Research Center (Dresden), Dresden, Germany
Haibo Chen
ACM Fellow & IEEE Fellow, Distinguished Professor, Shanghai Jiao Tong University
Operating Systems, MLSys, Distributed Systems