Mobile-MMLU: A Mobile Intelligence Language Understanding Benchmark

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Existing large language model (LLM) benchmarks are primarily designed for server- or desktop-class environments, neglecting the stringent resource constraints (e.g., compute, memory, energy), distinct interaction paradigms, and privacy requirements inherent to mobile devices.
Method: We introduce Mobile-MMLU, the first large-scale language understanding benchmark tailored for mobile platforms, comprising 16,186 multiple-choice questions across 80 real-world scenarios. We propose a mobile-native evaluation paradigm, define a high-difficulty subset (Mobile-MMLU-Pro), and establish a three-dimensional evaluation framework spanning device constraints, user interaction, and privacy adaptation. We further provide a lightweight on-device profiling toolchain and a compliant privacy testing protocol.
Contribution/Results: Mobile-MMLU enables holistic optimization of efficiency, accuracy, and privacy for mobile LLMs. All data, code, and tools are publicly released under an open-source license.
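The summary mentions a lightweight on-device profiling toolchain covering latency and memory. A minimal sketch of that kind of measurement, assuming a hypothetical `generate(prompt)` callable standing in for an on-device LLM (the paper's actual toolchain and its API are not shown here; energy and native-heap tracking are omitted):

```python
import time
import tracemalloc

def profile_generation(generate, prompt, n_runs=3):
    """Measure mean wall-clock latency and peak Python-heap memory for a
    text-generation callable. `generate` is a hypothetical stand-in for an
    on-device inference call; real mobile profiling would also cover energy
    use and allocations outside the Python heap."""
    latencies = []
    tracemalloc.start()
    for _ in range(n_runs):
        t0 = time.perf_counter()
        generate(prompt)
        latencies.append(time.perf_counter() - t0)
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {
        "mean_latency_s": sum(latencies) / n_runs,
        "peak_mem_mb": peak_bytes / 1e6,
    }

def toy_generate(prompt):
    buf = bytearray(1_000_000)  # simulate a ~1 MB inference workspace
    return "A"

stats = profile_generation(toy_generate, "Suggest a quick recipe")
```

The same harness could be pointed at any quantized on-device model to compare candidates under the benchmark's efficiency metrics.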

📝 Abstract
Rapid advancements in large language models (LLMs) have increased interest in deploying them on mobile devices for on-device AI applications. Mobile users interact differently with LLMs compared to desktop users, creating unique expectations and data biases. Current benchmark datasets primarily target server and desktop environments, and there is a notable lack of extensive datasets specifically designed for mobile contexts. Additionally, mobile devices face strict limitations in storage and computing resources, constraining model size and capabilities, thus requiring optimized efficiency and prioritized knowledge. To address these challenges, we introduce Mobile-MMLU, a large-scale benchmark dataset tailored for mobile intelligence. It consists of 16,186 questions across 80 mobile-related fields, designed to evaluate LLM performance in realistic mobile scenarios. A challenging subset, Mobile-MMLU-Pro, provides advanced evaluation: similar in size to MMLU-Pro but significantly more difficult than our standard full set. Both benchmarks use multiple-choice, order-invariant questions focused on practical mobile interactions, such as recipe suggestions, travel planning, and essential daily tasks. The dataset emphasizes critical mobile-specific metrics like inference latency, energy consumption, memory usage, and response quality, offering comprehensive insights into model performance under mobile constraints. Moreover, it prioritizes privacy and adaptability, assessing models' ability to perform on-device processing, maintain user privacy, and adapt to personalized usage patterns. The Mobile-MMLU family offers a standardized framework for developing and comparing mobile-optimized LLMs, enabling advancements in productivity and decision-making within mobile computing environments. Our code and data are available at: https://github.com/VILA-Lab/Mobile-MMLU.
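The abstract describes the questions as multiple-choice and order-invariant, i.e., a model should pick the gold answer regardless of how the options are ordered. A minimal sketch of such scoring, assuming a hypothetical `model(question, options)` interface that returns the chosen option text (this is an illustration, not the paper's evaluation code):

```python
import itertools

def order_invariant_accuracy(model, questions):
    """Score multiple-choice questions under every permutation of the answer
    options; a question counts as correct only if the model selects the gold
    answer in all orderings. `model(question, options)` is a hypothetical
    interface, not the benchmark's actual API."""
    correct = 0
    for q in questions:
        ok = all(
            model(q["question"], list(perm)) == q["answer"]
            for perm in itertools.permutations(q["options"])
        )
        correct += ok
    return correct / len(questions)

# Toy model that always returns the alphabetically first option,
# so its choice is independent of option order.
toy = lambda question, options: sorted(options)[0]

qs = [
    {"question": "2+2?", "options": ["4", "5", "7"], "answer": "4"},
    {"question": "Capital of France?", "options": ["Paris", "Rome"], "answer": "Paris"},
]
# The toy model gets the first question right in every ordering ("4" sorts
# first) and the second wrong only if the gold answer is not alphabetically
# first; here both are scored, one correct and one depends on the options.
acc = order_invariant_accuracy(toy, qs)
```

Exhaustive permutation is feasible for four-way multiple choice (24 orderings); larger option sets would call for sampled permutations instead.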
Problem

Research questions and friction points this paper is trying to address.

Lack of mobile-specific benchmark datasets for LLMs
Mobile devices face strict storage and computing limitations
Need for privacy and adaptability in mobile AI applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mobile-specific benchmark dataset for LLMs
Evaluates efficiency under mobile constraints
Prioritizes privacy and on-device processing
👥 Authors
S. Mahmoud Bsharat, VILA Lab, MBZUAI
Mukul Ranjan, Researcher, MBZUAI
Aidar Myrzakhan, MSc Student, Mohamed Bin Zayed University of Artificial Intelligence
Jiacheng Liu, VILA Lab, MBZUAI
Bowei Guo, VILA Lab, MBZUAI
Shengkun Tang, VILA Lab, MBZUAI
Zhuang Liu, Princeton University
Yuanzhi Li, Assistant Professor at CMU
Zhiqiang Shen, VILA Lab, MBZUAI