Metric Calculating Benchmark: Code-Verifiable Complicate Instruction Following Benchmark for Large Language Models

📅 2025-10-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing large language models (LLMs) have saturated mainstream benchmarks, which limits those benchmarks' ability to discriminate fine-grained differences in instruction following—particularly long-horizon step consistency and objectively verifiable computational processes. Method: We propose MCBench, the first instruction-following benchmark specifically designed around computing string-matching NLP metrics, featuring code-verifiable, reference-implementation-based automated evaluation. Its core innovation is a deterministic, fine-grained assessment framework that quantifies instruction comprehension, intermediate-result consistency, and numerical computation accuracy. Through multi-version task design and execution-trace comparison, MCBench enhances evaluation discriminability. Contribution/Results: Experiments demonstrate that MCBench effectively differentiates capability gaps among state-of-the-art LLMs—including GPT-4, Claude, and Qwen—in complex instruction execution. It establishes a high-fidelity, reproducible, and scalable evaluation paradigm for instruction-following research.

📝 Abstract
Recent frontier-level LLMs have saturated many previously difficult benchmarks, leaving little room for further differentiation. This progress highlights the need for challenging benchmarks that provide objective verification. In this paper, we introduce MCBench, a benchmark designed to evaluate whether LLMs can execute string-matching NLP metrics by strictly following step-by-step instructions. Unlike prior benchmarks that depend on subjective judgments or general reasoning, MCBench offers an objective, deterministic, and code-verifiable evaluation. This setup allows us to systematically test whether LLMs can maintain accurate step-by-step execution, including instruction adherence, numerical computation, and long-range consistency in handling intermediate results. To ensure objective evaluation of these abilities, we provide parallel reference code that can assess the accuracy of LLM output. We provide three evaluative metrics and three benchmark variants designed to measure the detailed instruction-understanding capability of LLMs. Our analyses show that MCBench serves as an effective and objective tool for evaluating the capabilities of cutting-edge LLMs.
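The evaluation setup described in the abstract — an LLM computes a string-matching metric step by step, and a parallel reference implementation checks its reported value — can be sketched as follows. This is a minimal illustration, not the paper's actual reference code; the choice of metric (token-level F1) and all function names are assumptions:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1, a typical string-matching NLP metric."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    # Count how many tokens overlap (multiset intersection).
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def verify_llm_output(llm_value: float, prediction: str, reference: str,
                      tol: float = 1e-6) -> bool:
    """Deterministic check: does the LLM's reported metric value match
    the reference implementation's result?"""
    return abs(llm_value - token_f1(prediction, reference)) < tol
```

Because the reference code is deterministic, agreement or disagreement with the LLM's final number is an objective, binary outcome rather than a judgment call.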
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs' ability to execute string-matching NLP metrics via step-by-step instructions
Provides objective verification through code-verifiable deterministic evaluation methods
Tests instruction adherence, numerical computation, and long-range consistency in execution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Code-verifiable benchmark for instruction following
Parallel reference code for objective evaluation
String-matching metrics with step-by-step execution
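The long-range-consistency aspect above — checking an LLM's intermediate results, not just its final answer — could be implemented by comparing the model's reported values against a reference execution trace. A minimal sketch, assuming a hypothetical trace format (a dict of step name to value; the step names here are illustrative, not from the paper):

```python
def compare_traces(llm_trace: dict, ref_trace: dict,
                   tol: float = 1e-6) -> dict:
    """Per-step correctness of an LLM's reported intermediate results
    against a reference execution trace."""
    return {
        step: abs(llm_trace.get(step, float("inf")) - ref_value) < tol
        for step, ref_value in ref_trace.items()
    }

# Example: the LLM gets the intermediate steps right but drifts on the
# final value, which a final-answer-only check would flag without
# localizing the error.
ref = {"precision": 0.5, "recall": 0.5, "f1": 0.5}
llm = {"precision": 0.5, "recall": 0.5, "f1": 0.45}
```

Comparing full traces rather than final answers is what lets this kind of benchmark distinguish a model that misunderstood the instructions from one that followed them but slipped on a single arithmetic step.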