RooflineBench: A Benchmarking Framework for On-Device LLMs via Roofline Analysis

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of uniformly evaluating inference efficiency of small language models (SLMs) on resource-constrained edge devices, where objective cross-hardware benchmarks remain scarce. To this end, we propose a systematic evaluation framework grounded in the Roofline model, introducing a novel metric—relative inference potential—that leverages operational intensity (OI) to jointly characterize hardware constraints and model architecture, thereby defining distinct inference potential regions. Empirical analysis reveals how sequence length and model depth influence performance and OI, uncovers efficiency pitfalls arising from hardware heterogeneity, and demonstrates that architectural optimizations such as Multi-Head Latent Attention (MLA) can effectively unlock hardware potential. This study provides both theoretical foundations and practical guidance for hardware-software co-design in edge-side intelligence.
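The summary's claim that sequence length drives operational intensity can be illustrated with a back-of-envelope calculation. The sketch below (not from the paper; the matrix size and fp16 assumption are illustrative) computes the OI of a single weight matmul and shows why single-token decode is memory-bound while long-prefill is compute-bound:

```python
def matmul_oi(m, k, n, bytes_per_elem=2):
    """Operational intensity of an (m x k) . (k x n) matmul, in FLOPs/byte.

    FLOPs: 2*m*k*n (one multiply and one add per output contribution).
    Bytes: read both operands and write the result, assuming fp16 storage.
    """
    flops = 2 * m * k * n
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return flops / bytes_moved

# A hypothetical 4096x4096 weight matrix applied to s tokens at once:
# OI grows with s, so prefill (large s) climbs toward the compute roof
# while single-token decode (s = 1) stays pinned near 1 FLOP/byte,
# i.e. firmly in the memory-bound region.
for s in (1, 128, 4096):
    print(s, round(matmul_oi(s, 4096, 4096), 1))
```

This is the basic mechanism behind the paper's observation that performance and OI vary with sequence length: the same weights are reused across more tokens, amortizing the memory traffic.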

📝 Abstract
The transition toward localized intelligence through Small Language Models (SLMs) has intensified the need for rigorous performance characterization on resource-constrained edge hardware. However, objectively measuring the theoretical performance ceilings of diverse architectures across heterogeneous platforms remains a formidable challenge. In this work, we propose a systematic framework based on the Roofline model that unifies architectural primitives and hardware constraints through the lens of operational intensity (OI). By defining an inference-potential region, we introduce the Relative Inference Potential as a novel metric to compare efficiency differences between Large Language Models (LLMs) on the same hardware substrate. Extensive empirical analysis across diverse compute tiers reveals that variations in performance and OI are significantly influenced by sequence length. We further identify a critical regression in OI as model depth increases. Additionally, our findings highlight an efficiency trap induced by hardware heterogeneity and demonstrate how structural refinements, such as Multi-head Latent Attention (MLA), can effectively unlock latent inference potential across various hardware substrates. These insights provide actionable directions for hardware-software co-design to align neural structures with physical constraints in on-device intelligence. The released code is available in Appendix C.
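The Roofline model the abstract builds on has a standard closed form: attainable performance is the lesser of the compute ceiling and the memory ceiling scaled by OI. A minimal sketch, assuming hypothetical edge-device numbers (the 4 TFLOP/s peak and 100 GB/s bandwidth are illustrative, not from the paper):

```python
def attainable_gflops(oi, peak_gflops, bandwidth_gb_s):
    """Roofline model: attainable performance in GFLOP/s.

    Performance is capped by whichever roof is lower at a given
    operational intensity (oi, in FLOPs per byte moved):
      - the flat compute roof (peak_gflops), or
      - the slanted memory roof (oi * bandwidth_gb_s).
    """
    return min(peak_gflops, oi * bandwidth_gb_s)

# Hypothetical edge device: 4000 GFLOP/s peak, 100 GB/s DRAM bandwidth.
# The ridge point, where the two roofs meet, is at OI = 4000/100 = 40
# FLOPs/byte; kernels below it are memory-bound, above it compute-bound.
peak, bw = 4000.0, 100.0
for oi in (1.0, 40.0, 200.0):
    print(oi, attainable_gflops(oi, peak, bw))
```

The paper's inference-potential region and Relative Inference Potential metric are defined on top of this curve; the exact metric definition is given in the paper itself and is not reproduced here.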
Problem

Research questions and friction points this paper is trying to address.

on-device LLMs
Roofline analysis
performance characterization
hardware heterogeneity
operational intensity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Roofline model
on-device LLMs
operational intensity
Relative Inference Potential
hardware-software co-design