🤖 AI Summary
Traditional F1 treats semantically similar yet non-identical label predictions as entirely incorrect in multi-label classification, which distorts evaluation in subjective or boundary-ambiguous settings. To address this, we propose Semantic F1: a semantics-aware F1 metric that defines soft precision and soft recall based on a label similarity matrix. It employs a two-stage computation framework that enables pairwise comparison across label sets of arbitrary sizes, avoiding forced alignment of dissimilar labels and eliminating reliance on rigid ontologies. The metric is designed to be fair, interpretable, and ecologically valid. Evaluated on synthetic data and real-world tasks with substantial annotator disagreement, such as sentiment analysis and fine-grained image annotation, Semantic F1 aligns significantly better with human judgment than conventional metrics, yielding more reasonable and reliable evaluation.
📝 Abstract
We propose Semantic F1 Scores, novel evaluation metrics for subjective or fuzzy multi-label classification that quantify semantic relatedness between predicted and gold labels. Unlike conventional F1 metrics, which treat semantically related predictions as complete failures, Semantic F1 incorporates a label similarity matrix to compute soft precision-like and recall-like scores, from which the Semantic F1 scores are derived. Unlike existing similarity-based metrics, our novel two-step precision-recall formulation enables the comparison of label sets of arbitrary sizes without discarding labels or forcing matches between dissimilar labels. By granting partial credit for semantically related but nonidentical labels, Semantic F1 better reflects the realities of domains marked by human disagreement or fuzzy category boundaries. In this way, it provides fairer evaluations: it recognizes that categories overlap, that annotators disagree, and that downstream decisions based on similar predictions lead to similar outcomes. Through theoretical justification and extensive empirical validation on synthetic and real data, we show that Semantic F1 offers greater interpretability and ecological validity. Because it requires only a domain-appropriate similarity matrix, which is robust to misspecification, rather than a rigid ontology, it is applicable across tasks and modalities.
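To make the idea concrete, here is a minimal sketch of a semantics-aware F1 in the spirit described above. It assumes the common "best match" instantiation: soft precision averages, over predicted labels, each label's maximum similarity to any gold label, and soft recall does the same from the gold side, so no label is discarded and no dissimilar pair is force-matched. The function name, the `(label, label) -> similarity` dictionary, and the example similarity values are illustrative assumptions, not the paper's exact formulation or API.

```python
def semantic_f1(pred, gold, sim):
    """Hedged sketch of a Semantic F1 score.

    pred, gold: lists of label names (arbitrary, possibly different sizes).
    sim: dict mapping (label_a, label_b) -> similarity in [0, 1],
         with sim[(l, l)] == 1.0. All of this is an illustrative assumption.
    """
    if not pred or not gold:
        # Degenerate case: both empty counts as perfect, otherwise zero.
        return 1.0 if pred == gold else 0.0
    # Soft precision: how well is each predicted label supported by some gold label?
    soft_p = sum(max(sim[(p, g)] for g in gold) for p in pred) / len(pred)
    # Soft recall: how well is each gold label covered by some predicted label?
    soft_r = sum(max(sim[(p, g)] for p in pred) for g in gold) / len(gold)
    if soft_p + soft_r == 0:
        return 0.0
    # Harmonic mean of the two soft scores.
    return 2 * soft_p * soft_r / (soft_p + soft_r)


# Toy usage with a hypothetical three-label emotion space:
labels = ["joy", "delight", "anger"]
sim = {(a, b): 1.0 if a == b else 0.0 for a in labels for b in labels}
sim[("joy", "delight")] = sim[("delight", "joy")] = 0.8

print(semantic_f1(["delight"], ["joy"], sim))  # partial credit instead of 0
print(semantic_f1(["anger"], ["joy"], sim))    # dissimilar labels still score 0
```

Under this sketch, predicting "delight" when the gold label is "joy" earns partial credit proportional to their similarity, whereas conventional exact-match F1 would score it as a complete failure.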