AI Summary
Existing meta-evaluation protocols for automatic metrics rely predominantly on global correlation, overlooking substantial variation in their local accuracy across specific contexts, such as particular models or task subsets.
Method: We propose a context-aware local meta-evaluation framework that systematically investigates heterogeneity in metric performance. Our approach integrates multi-task empirical analysis (machine translation, automatic speech recognition, and ranking), local correlation modeling, and cross-context stability quantification; a minimal sketch of the local-correlation step follows this summary.
Contribution/Results: Experiments across three core NLP tasks reveal pronounced context-dependent disparities in metric effectiveness: no single metric consistently dominates locally, and global rankings exhibit poor transferability to specific contexts. This work challenges the conventional global meta-evaluation paradigm and establishes a more reliable, scenario-adaptive foundation for assessing automatic evaluation metrics in NLP systems.
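The summary above includes no reference code, so the following minimal Python sketch, built on toy data, illustrates the kind of local correlation analysis it describes: a single pooled (global) Kendall's tau versus one tau per evaluation context. The data layout, the context labels, and the choice of Kendall's tau are assumptions made for illustration, not details taken from the paper.

```python
# Sketch of local vs. global metric meta-evaluation (assumed data layout).
# Each record holds a human quality judgment and an automatic metric score
# for one system output; "context" tags the slice (e.g., a model or subset).
from collections import defaultdict
from scipy.stats import kendalltau

records = [
    # (context, human_score, metric_score) -- toy values for illustration
    ("model_A", 0.9, 0.85), ("model_A", 0.4, 0.50), ("model_A", 0.7, 0.60),
    ("model_B", 0.8, 0.30), ("model_B", 0.3, 0.70), ("model_B", 0.6, 0.40),
]

# Global correlation: pool all outputs regardless of context.
human = [h for _, h, _ in records]
metric = [m for _, _, m in records]
global_tau, _ = kendalltau(human, metric)
print(f"global tau: {global_tau:.2f}")

# Local correlation: restrict the comparison to each context separately.
by_context = defaultdict(list)
for ctx, h, m in records:
    by_context[ctx].append((h, m))

for ctx, pairs in by_context.items():
    hs, ms = zip(*pairs)
    local_tau, _ = kendalltau(hs, ms)
    print(f"{ctx}: local tau: {local_tau:.2f}")
```

With data like this, the pooled tau can look moderate even though one context's local tau is strongly negative, which is exactly the failure mode that a context-specific meta-evaluation is meant to expose.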
Abstract
Meta-evaluation of automatic evaluation metrics -- assessing evaluation metrics themselves -- is crucial for accurately benchmarking natural language processing systems and has implications for scientific inquiry, production model development, and policy enforcement. While existing approaches to metric meta-evaluation focus on general statements about the absolute and relative quality of metrics across arbitrary system outputs, in practice, metrics are applied in highly contextual settings, often measuring performance on a highly constrained set of system outputs. For example, we may only be interested in evaluating a specific model or class of models. We introduce a method for contextual metric meta-evaluation by comparing the local metric accuracy of evaluation metrics. Across translation, speech recognition, and ranking tasks, we demonstrate that local metric accuracy varies in both absolute value and relative effectiveness as we shift across evaluation contexts. This observed variation highlights the importance of adopting context-specific metric evaluations over global ones.
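To make the abstract's notion of "local metric accuracy" concrete, here is a hedged sketch that treats it as pairwise ranking agreement with human judgments, computed once globally and then restricted to each context. The pairwise-accuracy formulation and all names (`pairwise_accuracy`, the context labels) are illustrative assumptions, not the paper's published definition.

```python
# Sketch of "local metric accuracy" as pairwise ranking agreement,
# restricted to output pairs drawn from a single evaluation context.
from itertools import combinations

def pairwise_accuracy(examples):
    """Fraction of pairs where the metric orders outputs as humans do.

    examples: list of (human_score, metric_score) tuples.
    Pairs tied on either score are skipped.
    """
    agree = total = 0
    for (h1, m1), (h2, m2) in combinations(examples, 2):
        if h1 == h2 or m1 == m2:
            continue  # no strict ordering to compare
        total += 1
        agree += ((h1 > h2) == (m1 > m2))
    return agree / total if total else float("nan")

# Toy data: context -> (human_score, metric_score) pairs.
contexts = {
    "mt_system_A": [(0.9, 0.8), (0.5, 0.4), (0.2, 0.3)],
    "mt_system_B": [(0.7, 0.2), (0.4, 0.6), (0.1, 0.5)],
}

# Global accuracy pools every context; local accuracy conditions on one.
pooled = [ex for exs in contexts.values() for ex in exs]
print("global:", pairwise_accuracy(pooled))
for ctx, exs in contexts.items():
    print(ctx, "local:", pairwise_accuracy(exs))
```

Under this toy setup, the metric looks reliable on one context and unreliable on another, so the single global number understates how much its trustworthiness depends on where it is applied.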