🤖 AI Summary
To address the challenges of excessive experimental scale, high resource consumption, and the trade-off between accuracy and efficiency in system-level LLM inference performance evaluation (e.g., throughput, latency), this paper proposes FMwork, a framework for efficient and reliable benchmarking. FMwork establishes a controlled test environment, introduces meta-metrics to quantify the cost–accuracy trade-off, designs a parameter selection strategy grounded in hardware–software interaction characteristics, and formulates a joint cost–performance optimization model. It achieves 96.6% accuracy relative to full-scale testing with only minimal samples (e.g., just 128 output tokens for Llama 3.1 8B), while improving experimental efficiency by up to 24× and delivering an additional 2.7× gain. Its core contribution is the first introduction of a meta-metric-driven sparse evaluation paradigm for LLM inference benchmarking, significantly enhancing scalability and reliability in large-scale performance analysis.
📝 Abstract
Benchmarking the inference performance (speed) of Foundation Models such as Large Language Models (LLMs) involves navigating a vast experimental landscape to understand the complex interactions between hardware and software components. However, evaluating every possible test configuration is impractical, infeasible, and unnecessary. To address this challenge, we introduce FMwork, a comprehensive and methodical approach to creating a controlled testing environment that accurately reflects and characterizes performance. FMwork comprises a set of benchmarking best practices with three key components: 1) meta-metrics, 2) parameter selection, and 3) strategic cost-performance evaluation. Meta-metrics account for the time and resources spent on benchmarking and the relative accuracy of the results compared to a larger body of measurements representing the complete experimental space. FMwork operationalizes the meta-metrics and provides efficient strategies for parameter selection and cost-performance analysis. Using the framework, we show up to a 24x improvement (speedup and/or resource savings) when running sweeps of experiments compared to the ground truth. Even when a reduced subset of experiments already serves as the reference point (using powers of two for batch sizes), shrinking the experimental output size from 1024 to 128 tokens yields another 2.7x gain while keeping 96.6% accuracy for an evaluation using the Llama 3.1 8B model.
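To make the meta-metric idea concrete, here is a minimal, hypothetical sketch (not the paper's actual definitions): it scores a sparse benchmark sweep against a full "ground truth" sweep, taking accuracy as one minus the mean relative throughput error on shared configurations, and cost savings as the ratio of total runtimes. The function name and the dictionary layout are illustrative assumptions.

```python
def meta_metrics(full, sparse):
    """Compare a sparse sweep against the full ground-truth sweep.

    full/sparse: dicts mapping config -> (throughput, cost_seconds).
    Returns (accuracy, speedup): accuracy is 1 - mean relative
    throughput error on shared configs; speedup is the ratio of
    total benchmarking cost (full / sparse).
    """
    shared = [k for k in sparse if k in full]
    rel_err = sum(
        abs(sparse[k][0] - full[k][0]) / full[k][0] for k in shared
    ) / len(shared)
    accuracy = 1.0 - rel_err
    full_cost = sum(cost for _, cost in full.values())
    sparse_cost = sum(cost for _, cost in sparse.values())
    return accuracy, full_cost / sparse_cost

# Toy example: a full batch-size sweep of 16 configs at 60 s each,
# vs. a powers-of-two subset with shorter (cheaper, slightly noisier) runs.
full = {bs: (100.0 * bs / (bs + 4), 60.0) for bs in range(1, 17)}
sparse = {bs: (full[bs][0] * 0.97, 10.0) for bs in (1, 2, 4, 8, 16)}

acc, speedup = meta_metrics(full, sparse)
# acc ≈ 0.97 (97% relative accuracy), speedup = 960 s / 50 s = 19.2x
```

The same two numbers, accuracy retained versus benchmarking cost saved, are what the abstract's headline figures (96.6% accuracy, up to 24x improvement) report for real sweeps.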