🤖 AI Summary
Existing individual tendency learning (ITL) methods lack a unified, quantifiable framework for rigorously assessing whether they genuinely capture annotator-level behavioral differences and yield behaviorally plausible explanations. This paper introduces the first evaluation framework for ITL in multi-annotator settings. Its core contributions are: (1) the Difference of Inter-annotator Consistency (DIC) metric, which quantifies a model's ability to capture heterogeneity in annotator behavior; and (2) the Behavior Alignment Explainability (BAE) metric, the first to jointly evaluate the alignment between ITL-generated explanations and empirically observed annotator behavior. The framework integrates multidimensional scaling, predictive similarity-structure comparison, and explanation verification, grounded in real-world annotation data. Extensive experiments demonstrate that DIC and BAE effectively distinguish state-of-the-art ITL methods in both tendency-modeling fidelity and explanation plausibility, establishing a reliable, behaviorally grounded benchmark for future ITL research.
📄 Abstract
Recent work in multi-annotator learning has shifted focus from Consensus-oriented Learning (CoL), which aggregates multiple annotations into a single ground-truth prediction, to Individual Tendency Learning (ITL), which models annotator-specific labeling behavior patterns (i.e., tendencies) to provide explanatory analyses of annotator decisions. However, no evaluation framework currently exists to assess whether ITL methods truly capture individual tendencies and provide meaningful behavioral explanations. To address this gap, we propose the first unified evaluation framework, with two novel metrics: (1) Difference of Inter-annotator Consistency (DIC), which quantifies how well models capture annotator tendencies by comparing predicted inter-annotator similarity structures with the ground truth; and (2) Behavior Alignment Explainability (BAE), which evaluates how well model explanations reflect annotator behavior and decision relevance by aligning explainability-derived similarity structures with ground-truth labeling similarity structures via Multidimensional Scaling (MDS). Extensive experiments validate the effectiveness of the proposed evaluation framework.
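The abstract does not give formulas for DIC or BAE, but the underlying idea can be sketched: build an inter-annotator similarity matrix from ground-truth labels and another from model-predicted per-annotator labels, compare the two structures, and embed a dissimilarity structure with MDS. The sketch below is a minimal illustration under stated assumptions: the function names (`agreement_matrix`, `dic_like_score`), the raw-agreement similarity measure, and the correlation-based comparison are illustrative choices, not the paper's actual definitions.

```python
import numpy as np
from sklearn.manifold import MDS  # assumes scikit-learn is installed

def agreement_matrix(labels):
    """Pairwise inter-annotator similarity from a (n_annotators, n_items)
    integer label matrix, using raw agreement (fraction of matching items)."""
    n = labels.shape[0]
    sim = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            sim[i, j] = sim[j, i] = np.mean(labels[i] == labels[j])
    return sim

def dic_like_score(true_labels, pred_labels):
    """Hypothetical DIC-style score: Pearson correlation between the upper
    triangles of the ground-truth and predicted similarity matrices.
    (The paper's actual DIC definition may differ.)"""
    s_true = agreement_matrix(true_labels)
    s_pred = agreement_matrix(pred_labels)
    iu = np.triu_indices_from(s_true, k=1)
    return np.corrcoef(s_true[iu], s_pred[iu])[0, 1]

rng = np.random.default_rng(0)
true_labels = rng.integers(0, 2, size=(5, 40))          # 5 annotators, 40 items
pred_labels = true_labels.copy()
pred_labels[:, :5] = rng.integers(0, 2, size=(5, 5))    # perturb a few predictions
print(dic_like_score(true_labels, pred_labels))

# BAE-style step, as the abstract describes: embed annotators from a
# dissimilarity structure via MDS so that explanation-derived and
# label-derived geometries can be compared.
dissim = 1.0 - agreement_matrix(true_labels)            # zero diagonal, symmetric
emb = MDS(n_components=2, dissimilarity="precomputed",
          random_state=0).fit_transform(dissim)
print(emb.shape)  # → (5, 2)
```

A perfectly tendency-preserving model reproduces the ground-truth similarity structure exactly and scores 1.0 under this illustrative measure; degradation in the predicted structure lowers the correlation.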