🤖 AI Summary
Existing AI system benchmarks (e.g., MLPerf) struggle to keep pace with rapid technological evolution, hindering informed deployment decisions and hardware-software co-optimization.
Method: We propose FlexBench, a framework that reframes benchmarking as a continual learning task, systematically evaluating large language model (LLM) inference performance across diverse software-hardware configurations. It supports multi-objective trade-offs among accuracy, latency, throughput, energy consumption, and cost.
Contribution/Results: FlexBench extends the MLPerf LLM inference benchmark with deep Hugging Face ecosystem integration and introduces the first collaborative, evolving Open MLPerf Dataset: a structured, extensible resource for predictive modeling and feature engineering. Experiments demonstrate reproducible, scalable evaluation of DeepSeek-R1 and LLaMA-3.3 on commodity servers, significantly enhancing efficiency in AI system co-design.
📝 Abstract
Existing AI system benchmarks such as MLPerf often struggle to keep pace with the rapidly evolving AI landscape, making it difficult to support informed deployment, optimization, and co-design decisions for AI systems. We suggest that benchmarking itself can be framed as an AI task: one in which models are continuously evaluated and optimized across diverse datasets, software, and hardware, using key metrics such as accuracy, latency, throughput, energy consumption, and cost. To support this perspective, we present FlexBench: a modular extension of the MLPerf LLM inference benchmark, integrated with Hugging Face and designed to provide relevant and actionable insights. Benchmarking results and metadata are collected into an Open MLPerf Dataset, which can be collaboratively curated, extended, and leveraged for predictive modeling and feature engineering. We successfully validated the FlexBench concept through MLPerf Inference submissions, including evaluations of DeepSeek R1 and LLaMA 3.3 on commodity servers. The broader objective is to enable practitioners to make cost-effective AI deployment decisions that reflect their available resources, requirements, and constraints.
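To make the multi-objective trade-off concrete, here is a minimal sketch of how benchmark records across configurations could be filtered by accuracy and latency constraints and then ranked by cost. The record fields and numbers are invented for illustration; they are not the actual Open MLPerf Dataset schema or real FlexBench results.

```python
# Hypothetical benchmark records, one per model/software/hardware configuration.
# Field names and values are illustrative only, not the Open MLPerf Dataset schema.
records = [
    {"config": "A", "accuracy": 0.91, "latency_ms": 120, "tokens_per_s": 450, "cost_usd": 0.8},
    {"config": "B", "accuracy": 0.89, "latency_ms": 60,  "tokens_per_s": 900, "cost_usd": 1.2},
    {"config": "C", "accuracy": 0.93, "latency_ms": 200, "tokens_per_s": 300, "cost_usd": 0.6},
]

def feasible(rec, min_accuracy, max_latency_ms):
    """A configuration is feasible if it meets the accuracy and latency constraints."""
    return rec["accuracy"] >= min_accuracy and rec["latency_ms"] <= max_latency_ms

def cheapest(records, min_accuracy, max_latency_ms):
    """Among feasible configurations, return the lowest-cost one (None if none qualify)."""
    candidates = [r for r in records if feasible(r, min_accuracy, max_latency_ms)]
    return min(candidates, key=lambda r: r["cost_usd"], default=None)

best = cheapest(records, min_accuracy=0.90, max_latency_ms=150)
print(best["config"])  # only "A" meets both constraints here
```

In practice this selection step would sit on top of a curated dataset of measured results, and could be replaced by predictive models that estimate metrics for configurations that have not yet been benchmarked.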