🤖 AI Summary
Many medical imaging AI studies claim that novel methods outperform existing ones without statistically rigorous validation, undermining the reliability of reported progress. Method: We systematically evaluate benchmarking practices across a representative cohort of papers, introducing, for the first time, a Bayesian framework that quantifies the probability of "false superiority" (i.e., an incorrect performance ranking), combined with empirically estimated model congruence to calibrate ranking confidence. Our approach integrates Bayesian inference, frequentist significance testing, cross-study meta-analysis, and congruence-aware modeling. Contribution/Results: More than 80% of papers claim outperformance when introducing a new method, yet we find a >5% risk of false superiority in 86% of classification papers and 53% of segmentation papers. These findings expose a fundamental flaw in current evaluation practice: uncertainty quantification and model congruence are routinely ignored. Our work establishes a reproducible, statistically grounded paradigm for trustworthy AI assessment in medical imaging.
📝 Abstract
Performance comparisons are fundamental in medical imaging Artificial Intelligence (AI) research, often driving claims of superiority based on relative improvements in common performance metrics. However, such claims frequently rely solely on empirical mean performance. In this paper, we investigate whether newly proposed methods genuinely outperform the state of the art by analyzing a representative cohort of medical imaging papers. We quantify the probability of false claims using a Bayesian approach that leverages reported results alongside empirically estimated model congruence to assess whether the relative ranking of methods is likely to have occurred by chance. According to our results, the majority (>80%) of papers claim outperformance when introducing a new method. Our analysis further reveals a high probability (>5%) of false outperformance claims in 86% of classification papers and 53% of segmentation papers. These findings highlight a critical flaw in current benchmarking practices: claims of outperformance in medical imaging AI are frequently unsubstantiated, posing a risk of misdirecting future research efforts.
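To make the intuition concrete, the sketch below illustrates how a false-superiority probability can be estimated for two models compared on the same test set, using a normal/Student-t approximation for the paired mean difference. This is only an illustrative simplification, not the paper's exact model; all means, standard deviations, sample sizes, and congruence (correlation) values are hypothetical placeholders.

```python
# Minimal sketch (not the paper's exact model): probability that a reported
# ranking of two models arose by chance, under a Student-t approximation for
# the mean difference of paired per-case scores. All numbers are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical reported results for a segmentation benchmark (Dice scores)
mean_new, mean_baseline = 0.874, 0.861   # reported mean performance
sd_new, sd_baseline = 0.09, 0.10         # per-case standard deviations
n_cases = 50                             # number of test cases
rho = 0.85                               # model congruence: correlation of per-case scores

# Standard error of the paired mean difference; high congruence shrinks it
var_diff = sd_new**2 + sd_baseline**2 - 2 * rho * sd_new * sd_baseline
se_diff = np.sqrt(var_diff / n_cases)

# Approximate posterior for the true mean difference under a flat prior
observed_diff = mean_new - mean_baseline
posterior = stats.t(df=n_cases - 1, loc=observed_diff, scale=se_diff)

# Probability that the baseline is actually at least as good (false-superiority risk)
p_false_superiority = posterior.cdf(0.0)
print(f"P(ranking reversed | reported results) = {p_false_superiority:.3f}")
```

The key role of congruence is visible in the variance term: the more correlated the two models' per-case scores are, the smaller the standard error of their mean difference, and the more (or less) credible a small reported improvement becomes.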