🤖 AI Summary
To address the challenge of co-optimizing large language models (LLMs) and hardware for on-device deployment on extended reality (XR) platforms, this work systematically benchmarks 17 LLMs across four XR devices: Magic Leap 2, Meta Quest 3, Vivo X100s Pro, and Apple Vision Pro. Inference performance is evaluated along four dimensions (consistency, latency, memory footprint, and power consumption) under varying input lengths, batch sizes, and thread counts to characterize real-time interaction trade-offs. The authors propose a Pareto-optimal, dual-objective (quality vs. speed) unified evaluation framework that integrates GGUF quantization, thread-scheduling optimization, fine-grained power monitoring, and latency-sensitive benchmarking. The study produces a reproducible performance atlas spanning 68 model-device configurations and identifies optimal deployments across scenarios; for example, Qwen2-1.5B achieves 24 tokens/s on Quest 3 with over two hours of battery life. The proposed methodology can serve as groundwork for future XR-LLM evaluation.
📝 Abstract
The deployment of large language models (LLMs) on extended reality (XR) devices has great potential to advance the field of human-AI interaction. For direct, on-device model inference, however, selecting the appropriate model and device for a specific task remains challenging. In this paper, we deploy 17 LLMs across four XR devices (Magic Leap 2, Meta Quest 3, Vivo X100s Pro, and Apple Vision Pro) and conduct a comprehensive evaluation. We devise an experimental setup and evaluate performance on four key metrics: performance consistency, processing speed, memory usage, and battery consumption. For each of the 68 model-device pairs, we assess performance under varying string lengths, batch sizes, and thread counts, analyzing the trade-offs for real-time XR applications. Finally, we propose a unified evaluation method based on Pareto optimality to select the optimal device-model pairs with respect to the quality and speed objectives. We believe our findings offer valuable insights to guide future optimization efforts for LLM deployment on XR devices, and our evaluation method can serve as standard groundwork for further research and development in this emerging field. All supplemental materials are available at www.nanovis.org/Loxr.html.
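To make the Pareto-based selection concrete, the following is a minimal illustrative sketch of picking non-dominated device-model pairs under two objectives (quality and speed, both maximized). The candidate names and scores are hypothetical placeholders, not measurements from the paper.

```python
def pareto_front(configs):
    """Return the configs not dominated on (quality, speed), both maximized.

    A config is dominated if some other config is at least as good on both
    objectives and strictly better on at least one.
    """
    front = []
    for name, quality, speed in configs:
        dominated = any(
            q2 >= quality and s2 >= speed and (q2 > quality or s2 > speed)
            for n2, q2, s2 in configs
            if n2 != name
        )
        if not dominated:
            front.append((name, quality, speed))
    return front


# Hypothetical (pair, quality score, tokens/s) triples for illustration only.
candidates = [
    ("pair-A", 0.82, 10.0),
    ("pair-B", 0.75, 24.0),
    ("pair-C", 0.70, 18.0),  # dominated by pair-B on both objectives
    ("pair-D", 0.90, 6.0),
]

print(pareto_front(candidates))
```

Here pair-C is dropped because pair-B matches or beats it on both objectives; the remaining pairs each represent a distinct quality-speed trade-off a practitioner could choose between.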