🤖 AI Summary
This work addresses the lack of fine-grained runtime observability in existing large language model (LLM) inference engines, a gap that hinders precise identification of performance bottlenecks. To close it, the authors introduce eBPF technology into LLM inference analysis for the first time, proposing a non-intrusive profiling framework that dynamically attaches function probes to capture multidimensional data (operator execution, computation graph structure, timelines, and hardware performance counters) without requiring any source code modification. The approach enables high-fidelity observation of complex behaviors such as Mixture-of-Experts (MoE) routing and operator offloading, at a runtime overhead below 4%. The framework thus offers an efficient, practical diagnostic capability that supports inference optimization, scheduling strategies, and resource-aware deployment.
📝 Abstract
As large language models (LLMs) move from research to production, understanding how inference engines behave in real time has become both essential and elusive. Unlike general-purpose engines such as ONNX Runtime, today's LLM inference systems offer little operator-level visibility, leaving developers blind to where time and resources go. Even basic questions -- is this workload memory-bound or compute-bound? -- often go unanswered. To close this gap, we develop a fine-grained, non-intrusive profiling framework for modern LLM inference engines, exemplified by llama.cpp but applicable to similar runtime architectures. Built on extended Berkeley Packet Filter (eBPF) technology, our system dynamically attaches probes to runtime functions across multiple layers -- without modifying or recompiling the source. It transforms the collected traces into rich visualizations of operators, graphs, timelines, and hardware counter trends, exposing how dense inference, Mixture-of-Experts routing, and operator offloading behave in practice. With less than 4% runtime overhead and high profiling fidelity, our framework makes LLM inference both transparent and diagnosable, turning performance profiling into a practical tool for optimization, scheduling, and resource-aware deployment.
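The abstract does not show code, but the core mechanism it describes -- dynamically attaching user-space eBPF probes to runtime functions without recompilation -- can be illustrated with a minimal BCC sketch. This is not the paper's framework: the binary path `./llama-cli` and the choice of `ggml_graph_compute` as the traced symbol are assumptions for illustration (the symbol is a real ggml entry point, but its visibility depends on how the binary was built; check with `nm`).

```python
#!/usr/bin/env python3
# Minimal sketch (not the paper's framework): time each call to a
# llama.cpp/ggml function in a running binary via a uprobe/uretprobe pair.
# ASSUMPTIONS: binary path and symbol name below; adjust for your build.
import time
from bcc import BPF

BPF_PROGRAM = r"""
#include <uapi/linux/ptrace.h>

BPF_HASH(start, u32, u64);     // tid -> entry timestamp (ns)
BPF_HISTOGRAM(latency_us);     // log2 histogram of per-call latency

int on_entry(struct pt_regs *ctx) {
    u32 tid = bpf_get_current_pid_tgid();   // lower 32 bits = thread id
    u64 ts = bpf_ktime_get_ns();
    start.update(&tid, &ts);
    return 0;
}

int on_return(struct pt_regs *ctx) {
    u32 tid = bpf_get_current_pid_tgid();
    u64 *tsp = start.lookup(&tid);
    if (tsp == 0)
        return 0;                            // missed the entry probe
    u64 delta_us = (bpf_ktime_get_ns() - *tsp) / 1000;
    latency_us.increment(bpf_log2l(delta_us));
    start.delete(&tid);
    return 0;
}
"""

BINARY = "./llama-cli"            # hypothetical path to the inference binary
SYMBOL = "ggml_graph_compute"     # assumed-visible symbol; verify with nm/objdump

b = BPF(text=BPF_PROGRAM)
b.attach_uprobe(name=BINARY, sym=SYMBOL, fn_name="on_entry")
b.attach_uretprobe(name=BINARY, sym=SYMBOL, fn_name="on_return")

print(f"Tracing {SYMBOL} in {BINARY}... Ctrl-C to dump the histogram.")
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    pass
b["latency_us"].print_log2_hist("usecs")
```

Run as root while the inference binary is serving requests. The probes attach to the live process image, so the target is neither modified nor recompiled -- the same non-intrusive property the framework builds on, extended in the paper to full operator, graph, timeline, and hardware-counter views.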