🤖 AI Summary
In LLM evaluation,裁判 classifiers—such as LLMs or human annotators—are commonly assessed using metrics like accuracy, precision, or F1-score; however, these are susceptible to class imbalance and biases in positive-class definition, leading to distorted model comparisons.
Method: We propose Youden’s J statistic—and its linear equivalent, balanced accuracy—as the core evaluation metric for裁判 classifiers, systematically introducing it into LLM evaluation for the first time. We rigorously validate its mathematical robustness via theoretical derivation, Monte Carlo simulation, and empirical analysis.
Contribution/Results: Balanced accuracy demonstrates superior stability in identifying the truly optimal裁判 model compared to conventional metrics, especially under positive-class ambiguity and severe class imbalance. It significantly enhances fairness, robustness, and cross-model comparability of LLM evaluations. Our work establishes a novel, principled paradigm for trustworthy LLM assessment grounded in statistically sound classification evaluation.
📝 Abstract
Rigorous evaluation of large language models (LLMs) relies on comparing models by the prevalence of desirable or undesirable behaviors, such as task pass rates or policy violations. These prevalence estimates are produced by a classifier, either an LLM-as-a-judge or human annotators, making the choice of classifier central to trustworthy evaluation. Common metrics used for this choice, such as Accuracy, Precision, and F1, are sensitive to class imbalance and to arbitrary choices of positive class, and can favor judges that distort prevalence estimates. We show that Youden's $J$ statistic is theoretically aligned with choosing the best judge to compare models, and that Balanced Accuracy is an equivalent linear transformation of $J$. Through both analytical arguments and empirical examples and simulations, we demonstrate how selecting judges using Balanced Accuracy leads to better, more robust classifier selection.