🤖 AI Summary
Existing large language model (LLM) benchmarks are designed primarily for server- or desktop-class environments, neglecting the stringent resource constraints (e.g., compute, memory, energy), distinct interaction paradigms, and privacy requirements inherent to mobile devices. Method: We introduce Mobile-MMLU, the first large-scale language understanding benchmark tailored to mobile platforms, comprising 16,186 multiple-choice questions across 80 real-world scenarios. We propose a mobile-native evaluation paradigm, define a high-difficulty subset (Mobile-MMLU-Pro), and establish a three-dimensional evaluation framework spanning device constraints, user interaction, and privacy and adaptability. We further provide a lightweight on-device profiling toolchain and a compliant privacy testing protocol. Contribution/Results: Mobile-MMLU enables holistic optimization of efficiency, accuracy, and privacy for mobile LLMs. All data, code, and tools are publicly released under an open-source license.
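The profiling toolchain itself is not reproduced here; the following is a minimal Python sketch of the kind of measurement it targets, timing a generic `generate` callable and recording peak Python-heap allocation. The `generate` interface is a hypothetical placeholder rather than the benchmark's API, and `tracemalloc` only sees Python-level allocations; a real on-device harness would also read platform memory and energy counters.

```python
import time
import tracemalloc

def profile_generation(generate, prompt, n_runs=5):
    """Rough per-request latency and Python-heap peak for a text-generation
    callable. `generate(prompt) -> str` is a hypothetical interface."""
    latencies, peaks = [], []
    for _ in range(n_runs):
        tracemalloc.start()
        t0 = time.perf_counter()
        generate(prompt)                           # the call under measurement
        latencies.append(time.perf_counter() - t0)
        _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
        tracemalloc.stop()
        peaks.append(peak)
    return {
        "mean_latency_s": sum(latencies) / len(latencies),
        "peak_heap_mb": max(peaks) / 1e6,
    }

if __name__ == "__main__":
    # Stand-in "model": real usage would pass an on-device LLM's generate function.
    print(profile_generation(lambda p: p[::-1], "Suggest a 20-minute dinner recipe."))
```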
📝 Abstract
Rapid advancements in large language models (LLMs) have increased interest in deploying them on mobile devices for on-device AI applications. Mobile users interact with LLMs differently than desktop users do, creating unique expectations and data biases. Current benchmark datasets primarily target server and desktop environments, and there is a notable lack of extensive datasets specifically designed for mobile contexts. Additionally, mobile devices face strict limitations in storage and computing resources, which constrain model size and capabilities and therefore require optimized efficiency and careful prioritization of knowledge. To address these challenges, we introduce Mobile-MMLU, a large-scale benchmark dataset tailored for mobile intelligence. It consists of 16,186 questions across 80 mobile-related fields, designed to evaluate LLM performance in realistic mobile scenarios. A challenging subset, Mobile-MMLU-Pro, provides advanced evaluation: it is comparable in size to MMLU-Pro but significantly more difficult than our standard full set. Both benchmarks use multiple-choice, order-invariant questions focused on practical mobile interactions, such as recipe suggestions, travel planning, and essential daily tasks. The dataset emphasizes critical mobile-specific metrics such as inference latency, energy consumption, memory usage, and response quality, offering comprehensive insights into model performance under mobile constraints. Moreover, it prioritizes privacy and adaptability, assessing models' ability to perform on-device processing, maintain user privacy, and adapt to personalized usage patterns. The Mobile-MMLU family offers a standardized framework for developing and comparing mobile-optimized LLMs, enabling advancements in productivity and decision-making within mobile computing environments. Our code and data are available at: https://github.com/VILA-Lab/Mobile-MMLU.
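To make the order-invariance criterion concrete, here is an illustrative Python sketch (not the released evaluation code): a model is scored as robustly correct on a question only if it selects the gold answer under every permutation of the options. `model_choice` is a hypothetical callable returning the index of the chosen option; exhaustive permutation is feasible only for small option counts, and a production harness might subsample permutations instead.

```python
import itertools

def order_invariant_accuracy(model_choice, question, options, answer_idx):
    """Return True only if the model picks the gold answer under every
    ordering of the options. `model_choice(question, options) -> int`
    is a hypothetical callable returning the chosen option's index."""
    for perm in itertools.permutations(range(len(options))):
        shuffled = [options[i] for i in perm]
        pred = model_choice(question, shuffled)
        if perm[pred] != answer_idx:   # map prediction back to original index
            return False               # order-sensitive, or simply wrong
    return True

if __name__ == "__main__":
    # Toy model that always picks the longest option (order-invariant by construction).
    pick_longest = lambda q, opts: max(range(len(opts)), key=lambda i: len(opts[i]))
    print(order_invariant_accuracy(
        pick_longest,
        "Which snack travels best on a hike?",
        ["chips", "trail mix with nuts and dried fruit", "soup", "ice cream"],
        1,
    ))  # True: the gold answer is chosen under all 24 orderings
```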