Towards a Diagnostic and Predictive Evaluation Methodology for Sequence Labeling Tasks

πŸ“… 2026-02-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF

πŸ“ Abstract
Standard evaluation in NLP typically indicates that system A is better on average than system B, but it provides little information on how to improve performance and, what is worse, it should not come as a surprise if B ends up being better than A on outside data. We propose an evaluation methodology for sequence labeling tasks grounded in error analysis that provides both quantitative and qualitative information on where systems must be improved and predicts how models will perform on a different distribution. The key is to create test sets that, contrary to common practice, do not rely on gathering large amounts of real-world in-distribution scraped data, but consist of a handcrafted, small set of linguistically motivated examples that exhaustively cover the range of span attributes (such as shape, length, casing, sentence position, etc.) a system may encounter in the wild. We demonstrate this methodology on a benchmark for anglicism identification in Spanish. Our methodology provides results that are diagnostic (because they help identify systematic weaknesses in performance), actionable (because they can inform which model is better suited for a given scenario) and predictive: our method predicts model performance on external datasets with a median correlation of 0.85.
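The core idea, as described in the abstract, is to score a sequence labeler per span attribute rather than only in aggregate. The sketch below is illustrative, not the paper's released code: the specific attribute set (casing, single vs. multiword, sentence position) and the data format are assumptions chosen to make the breakdown concrete.

```python
# Illustrative sketch: break recall down by gold-span attributes so that
# systematic weaknesses (e.g. poor recall on multiword spans) become visible.
# The attribute set and data format here are hypothetical, not the paper's.
from collections import defaultdict

def span_attributes(span, sentence):
    """Derive simple diagnostic attributes for a gold span."""
    text = span["text"]
    return {
        "casing": "upper" if text.isupper() else ("title" if text.istitle() else "lower"),
        "length": "multiword" if " " in text else "single",
        "position": "initial" if sentence.startswith(text) else "non-initial",
    }

def per_attribute_recall(examples, predictions):
    """Recall of predicted spans, split by each gold-span attribute value."""
    hits, totals = defaultdict(int), defaultdict(int)
    for ex, preds in zip(examples, predictions):
        pred_set = {(p["start"], p["end"]) for p in preds}
        for span in ex["spans"]:
            for attr, value in span_attributes(span, ex["sentence"]).items():
                totals[(attr, value)] += 1
                if (span["start"], span["end"]) in pred_set:
                    hits[(attr, value)] += 1
    return {key: hits[key] / totals[key] for key in totals}

# Toy anglicism-identification example in the spirit of the benchmark:
examples = [
    {"sentence": "El hacker usa un smartphone.",
     "spans": [{"start": 3, "end": 9, "text": "hacker"},
               {"start": 17, "end": 27, "text": "smartphone"}]},
]
predictions = [[{"start": 3, "end": 9}]]  # model found "hacker" only
print(per_attribute_recall(examples, predictions))
```

A real diagnostic test set would populate every attribute combination exhaustively with handcrafted examples; the per-attribute scores then indicate where a model is likely to fail on external data.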
Problem

Research questions and friction points this paper is trying to address.

evaluation methodology
sequence labeling
out-of-distribution prediction
error analysis
model diagnostics
Innovation

Methods, ideas, or system contributions that make the work stand out.

sequence labeling
error analysis
diagnostic evaluation
predictive evaluation
linguistically motivated test sets
πŸ”Ž Similar Papers
No similar papers found.