lm-Meter: Unveiling Runtime Inference Latency for On-Device Language Models

📅 2025-10-07
📈 Citations: 0 (Influential: 0)
🤖 AI Summary
To address the challenges of inaccurate latency measurement and insufficient optimization guidance for on-device LLM inference on mobile/edge devices, this paper proposes the first lightweight, fully client-side online latency analysis framework. It enables real-time, phase-level (embedding, prefill, decoding, softmax, sampling) and kernel-level profiling—without requiring external instrumentation or runtime modifications—thereby minimizing system overhead. On commercial mobile platforms, it incurs only a 2.58% throughput reduction during prefill and 0.99% during decoding under Powersave mode, while enabling high-precision bottleneck identification and quantitative trade-off analysis between efficiency and accuracy. The core contribution is the first low-overhead, fine-grained, full-stack on-device latency monitoring solution for LLM inference, establishing a reproducible foundation for performance insights, model deployment, and systems optimization in resource-constrained environments.
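The phase-level profiling described above can be pictured with a minimal sketch. This is an illustrative Python timer, not lm-Meter's actual implementation (which is client-side on mobile platforms and also profiles at kernel level); the `PhaseProfiler` class and its methods are hypothetical names.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Hypothetical sketch of phase-level latency profiling in the spirit of
# lm-Meter. Names and structure are illustrative assumptions, not the
# paper's actual API.
class PhaseProfiler:
    def __init__(self):
        # phase name -> list of observed latencies in seconds
        self.records = defaultdict(list)

    @contextmanager
    def phase(self, name):
        # Time a single execution of one inference phase
        # (e.g., "embedding", "prefill", "decode", "softmax", "sampling").
        start = time.perf_counter()
        try:
            yield
        finally:
            self.records[name].append(time.perf_counter() - start)

    def summary(self):
        # Mean latency per phase, in milliseconds.
        return {name: 1000 * sum(ts) / len(ts)
                for name, ts in self.records.items()}

profiler = PhaseProfiler()
with profiler.phase("prefill"):
    sum(i * i for i in range(100_000))  # stand-in for prefill compute
with profiler.phase("decode"):
    sum(i for i in range(10_000))       # stand-in for one decode step
print(profiler.summary())
```

A real on-device profiler must additionally keep its own overhead low (the paper reports only 2.58% prefill and 0.99% decode throughput reduction under Powersave), which is why lightweight timing hooks like the above are used instead of heavyweight external tracing.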

📝 Abstract
Large Language Models (LLMs) are increasingly integrated into everyday applications, but their prevalent cloud-based deployment raises growing concerns around data privacy and long-term sustainability. Running LLMs locally on mobile and edge devices (on-device LLMs) offers the promise of enhanced privacy, reliability, and reduced communication costs. However, realizing this vision remains challenging due to substantial memory and compute demands, as well as limited visibility into performance-efficiency trade-offs on resource-constrained hardware. We propose lm-Meter, the first lightweight, online latency profiler tailored for on-device LLM inference. lm-Meter captures fine-grained, real-time latency at both phase (e.g., embedding, prefill, decode, softmax, sampling) and kernel levels without auxiliary devices. We implement lm-Meter on commercial mobile platforms and demonstrate its high profiling accuracy with minimal system overhead, e.g., only 2.58% throughput reduction in prefill and 0.99% in decode under the most constrained Powersave governor. Leveraging lm-Meter, we conduct comprehensive empirical studies revealing phase- and kernel-level bottlenecks in on-device LLM inference, quantifying accuracy-efficiency trade-offs, and identifying systematic optimization opportunities. lm-Meter provides unprecedented visibility into the runtime behavior of LLMs on constrained platforms, laying the foundation for informed optimization and accelerating the democratization of on-device LLM systems. Code and tutorials are available at https://github.com/amai-gsu/LM-Meter.
Problem

Research questions and friction points this paper is trying to address.

Profiling runtime latency for on-device LLM inference
Identifying performance bottlenecks in constrained hardware environments
Quantifying accuracy-efficiency trade-offs for mobile LLM optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight online profiler for on-device LLM latency
Captures fine-grained real-time latency at multiple levels
Enables bottleneck identification and systematic optimization opportunities
Haoxin Wang
Georgia State University, Atlanta, GA, USA
Xiaolong Tu
Georgia State University, Atlanta, GA, USA
Hongyu Ke
Georgia State University, Atlanta, GA, USA
Huirong Chai
Georgia State University, Atlanta, GA, USA
Dawei Chen
Toyota InfoTech Labs, Mountain View, CA, USA
Kyungtae Han
InfoTech Labs, Toyota Motor North America
Generative AI · AI Agents · Connected Automated Vehicles · ADAS · ITS