LoXR: Performance Evaluation of Locally Executing LLMs on XR Devices

📅 2025-02-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of co-optimizing large language models (LLMs) and hardware for on-device deployment on extended reality (XR) platforms, this work systematically benchmarks 17 LLMs across four XR devices (Magic Leap 2, Meta Quest 3, Vivo X100s Pro, and Apple Vision Pro), evaluating inference performance along four dimensions: consistency, latency, memory footprint, and power consumption, under varying input lengths, batch sizes, and thread counts, to characterize real-time interaction trade-offs. The authors propose a unified, Pareto-optimality-based evaluation framework over two objectives (quality vs. speed), combining GGUF-quantized models, thread-count variation, fine-grained power monitoring, and latency-sensitive benchmarking. The study produces a reproducible performance atlas spanning 68 model-device configurations and identifies deployment sweet spots, e.g., Qwen2-1.5B reaching roughly 24 tokens/s on Quest 3 with over two hours of battery life. The methodology is offered as standard groundwork for future research on LLM deployment on XR devices.
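The throughput metric cited above (tokens per second) can be measured with a simple timing harness. The sketch below is illustrative only: `generate` and `fake_generate` are hypothetical stand-ins for an on-device inference call, not APIs from the paper.

```python
import time

def measure_tokens_per_second(generate, prompt, n_tokens):
    """Time a token-generation callable and report throughput in tokens/s.

    `generate` is a placeholder for the device's on-device inference
    call (e.g., a GGUF/llama.cpp-backed model); it is an assumption
    made for this sketch, not part of the paper's code.
    """
    start = time.perf_counter()
    produced = generate(prompt, n_tokens)
    elapsed = time.perf_counter() - start
    return len(produced) / elapsed

def fake_generate(prompt, n_tokens, per_token_s=0.001):
    """Stub generator simulating a fixed per-token latency."""
    out = []
    for _ in range(n_tokens):
        time.sleep(per_token_s)
        out.append("tok")
    return out

tps = measure_tokens_per_second(fake_generate, "hello", 50)
print(f"{tps:.1f} tokens/s")
```

In a real benchmark, the same harness would be run across input lengths, batch sizes, and thread counts, as the paper's experimental setup describes.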

📝 Abstract
The deployment of large language models (LLMs) on extended reality (XR) devices has great potential to advance the field of human-AI interaction. In the case of direct, on-device model inference, selecting the appropriate model and device for specific tasks remains challenging. In this paper, we deploy 17 LLMs across four XR devices--Magic Leap 2, Meta Quest 3, Vivo X100s Pro, and Apple Vision Pro, and conduct a comprehensive evaluation. We devise an experimental setup and evaluate performance on four key metrics: performance consistency, processing speed, memory usage, and battery consumption. For each of the 68 model-device pairs, we assess performance under varying string lengths, batch sizes, and thread counts, analyzing the trade-offs for real-time XR applications. We finally propose a unified evaluation method based on the Pareto Optimality theory to select the optimal device-model pairs from the quality and speed objectives. We believe our findings offer valuable insights to guide future optimization efforts for LLM deployment on XR devices. Our evaluation method can be followed as standard groundwork for further research and development in this emerging field. All supplemental materials are available at www.nanovis.org/Loxr.html.
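The Pareto-optimality selection the abstract describes, i.e., keeping only model-device pairs not dominated in both quality and speed, can be sketched as follows. The quality scores and speeds below are hypothetical values for illustration, not the paper's measurements.

```python
def pareto_front(pairs):
    """Return labels of Pareto-optimal (quality, speed) pairs.

    A pair is Pareto-optimal if no other pair is at least as good in
    both objectives and strictly better in at least one.
    """
    front = []
    for name, q, s in pairs:
        dominated = any(
            (q2 >= q and s2 >= s) and (q2 > q or s2 > s)
            for n2, q2, s2 in pairs
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical model-device pairs: (label, quality score, tokens/s).
candidates = [
    ("A", 0.80, 10.0),
    ("B", 0.70, 24.0),
    ("C", 0.60, 20.0),  # dominated by B (worse quality and slower)
    ("D", 0.90, 5.0),
]
print(pareto_front(candidates))  # ['A', 'B', 'D']
```

With 68 model-device configurations as inputs, the same filter would yield the set of non-dominated deployment options from which an application can pick according to its quality/speed priorities.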
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs on XR devices for optimal performance
Benchmarking model-device pairs for real-time XR applications
Comparing on-device, client-server, and cloud LLM efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive framework for benchmarking XR LLMs
Unified evaluation using Pareto Optimality theory over quality and speed objectives
Comparison of on-device, client-server, and cloud LLM efficiency
Dawar Khan
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
Xinyu Liu
King Abdullah University of Science and Technology (KAUST), Saudi Arabia and University of Electronic Science and Technology of China, Chengdu, China
Omar Mena
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
Donggang Jia
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
Alexandre Kouyoumdjian
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
Ivan Viola
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
computer graphics, visualization, illustrative visualization, molecular visualization