Sustainability Is Not Linear: Quantifying Performance, Energy, and Privacy Trade-offs in On-Device Intelligence

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the multi-objective trade-offs among generation quality, energy consumption, latency, and memory when deploying large language models on edge devices. The authors build a reproducible empirical evaluation framework to systematically analyze the energy efficiency, performance, and privacy characteristics of models ranging from 0.5B to 9B parameters on a real-world Android device (Samsung Galaxy S25 Ultra). Leveraging non-intrusive, fine-grained power monitoring and mixed-precision inference, they uncover a "quantization-energy paradox": model architecture, not quantization strategy, dominates energy consumption. Notably, Mixture-of-Experts architectures disrupt the conventional scaling-energy relationship, and mid-sized models such as Qwen2.5-3B emerge as the optimal choice, balancing high output quality with energy efficiency and thereby offering practical deployment guidance for on-device intelligence.
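The summary does not spell out how the power monitoring works. As an illustration only: a common non-root approach on Android is to sample the battery's instantaneous current and voltage from sysfs (e.g. `/sys/class/power_supply/battery/current_now` and `voltage_now`, readable via `adb shell cat` on many devices) during a generation run and integrate power over time. The sysfs paths, sampling interval, and numbers below are assumptions for the sketch, not details taken from the paper:

```python
# Sketch: energy estimation from battery sysfs samples (assumed methodology).
# current_now is reported in microamps, voltage_now in microvolts on many
# Android devices; we assume samples were already collected at a fixed
# interval and compute average power and total energy.

def energy_from_samples(samples, interval_s):
    """samples: list of (current_uA, voltage_uV) pairs taken every
    interval_s seconds. Returns (avg_power_W, energy_J)."""
    if not samples:
        return 0.0, 0.0
    # instantaneous power in watts: (uA * uV) / 1e12
    powers = [abs(i) * v / 1e12 for i, v in samples]
    avg_power = sum(powers) / len(powers)
    energy = avg_power * interval_s * len(powers)  # rectangle-rule integral
    return avg_power, energy

# Example: 3 samples at 0.5 s spacing, ~1 A draw at 4 V -> ~4 W, ~6 J
samples = [(1_000_000, 4_000_000), (1_100_000, 4_000_000), (900_000, 4_000_000)]
avg_w, joules = energy_from_samples(samples, 0.5)
```

Because this only reads existing kernel counters, it adds no measurable load of its own, which is what "non-intrusive" monitoring requires.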
📝 Abstract
The migration of Large Language Models (LLMs) from cloud clusters to edge devices promises enhanced privacy and offline accessibility, but this transition encounters a harsh reality: the physical limits of mobile batteries, thermal envelopes, and, most importantly, memory capacity. To navigate this landscape, we constructed a reproducible experimental pipeline to profile the complex interplay between energy consumption, latency, and quality. Unlike theoretical studies, we captured granular power metrics across eight models ranging from 0.5B to 9B parameters without requiring root access, ensuring our findings reflect realistic user conditions. We harness this pipeline to conduct an empirical case study on a flagship Android device, the Samsung Galaxy S25 Ultra, establishing foundational hypotheses regarding the trade-offs between generation quality, performance, and resource consumption. Our investigation uncovered a counter-intuitive quantization-energy paradox. While modern importance-aware quantization successfully reduces memory footprints to fit larger models into RAM, we found it yields negligible energy savings compared to standard mixed-precision methods. This indicates that for battery life, the architecture of the model, not its quantization scheme, is the decisive factor. We further identified that Mixture-of-Experts (MoE) architectures defy the standard size-energy trend, offering the storage capacity of a 7B model while maintaining the lower energy profile of a 1B to 2B model. Finally, an analysis of these multi-objective trade-offs reveals a pragmatic sweet spot in mid-sized models, such as Qwen2.5-3B, that effectively balance response quality with sustainable energy consumption.
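The "sweet spot" the abstract describes is a Pareto trade-off: a model is worth considering only if no other model is at least as good on quality and strictly cheaper on energy (or vice versa). A minimal sketch of such a frontier computation, where the model names and numbers are placeholders for illustration, not measurements from the paper:

```python
# Sketch: Pareto frontier over (quality: higher is better, energy: lower is better).
def pareto_frontier(models):
    """models: dict name -> (quality, energy_J). Returns the set of
    non-dominated model names."""
    frontier = set()
    for name, (q, e) in models.items():
        dominated = any(
            (q2 >= q and e2 <= e) and (q2 > q or e2 < e)
            for other, (q2, e2) in models.items() if other != name
        )
        if not dominated:
            frontier.add(name)
    return frontier

# Placeholder numbers for illustration only.
models = {
    "tiny-0.5B": (0.55, 20.0),
    "mid-3B":    (0.78, 45.0),   # balanced quality vs. energy
    "large-9B":  (0.80, 140.0),  # marginal quality gain, large energy cost
    "bad-7B":    (0.70, 150.0),  # dominated by mid-3B on both axes
}
frontier = pareto_frontier(models)  # excludes bad-7B
```

A deployment recommendation like "Qwen2.5-3B" would then be the knee of this frontier: the point where further quality gains start costing disproportionate energy.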
Problem

Research questions and friction points this paper is trying to address.

on-device intelligence
energy consumption
model quantization
memory constraints
performance trade-offs
Innovation

Methods, ideas, or system contributions that make the work stand out.

on-device LLMs
energy-latency-quality trade-offs
quantization-energy paradox
Mixture-of-Experts (MoE)
sustainable AI