How to Evaluate Medical AI

📅 2025-09-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the unreliability of conventional diagnostic metrics (e.g., precision, recall) in medical AI evaluation due to inter-expert variability, and the limited interpretability of existing agreement measures (e.g., Cohen's Kappa). We propose a clinically oriented evaluation framework grounded in multi-expert free-text diagnoses. It introduces Relative Precision and Recall of Algorithmic Diagnostics (RPAD/RRAD), normalized by observed inter-expert disagreement, and integrates an LLM-driven free-text diagnosis matching algorithm with automated consistency assessment. Evaluated on 360 real-world clinical dialogues, the framework shows that top-performing models (e.g., DeepSeek-V3) achieve diagnostic consistency on par with or exceeding physician consensus, with 98% accuracy in free-text diagnosis matching. Crucially, the study provides quantitative evidence that inter-expert variability often exceeds human-AI disagreement, establishing a more stable and interpretable paradigm for assessing the clinical reliability of AI diagnostic systems without relying on fixed diagnosis lists.

📝 Abstract
The integration of artificial intelligence (AI) into medical diagnostic workflows requires robust and consistent evaluation methods to ensure reliability and clinical relevance while accounting for the inherent variability in expert judgments. Traditional metrics such as precision and recall often fail to account for this variability, leading to inconsistent assessments of AI performance. Inter-rater agreement statistics such as Cohen's Kappa are more reliable but lack interpretability. We introduce Relative Precision and Recall of Algorithmic Diagnostics (RPAD and RRAD), new evaluation metrics that compare AI outputs against multiple expert opinions rather than a single reference. By normalizing performance against inter-expert disagreement, these metrics provide a more stable and realistic measure of the quality of predicted diagnoses. Beyond the comprehensive analysis of diagnostic quality measures, our study yields an important secondary result: the evaluation methodology does not require diagnoses to be selected from a fixed list. Instead, both the models under test and the examining physicians produce free-form diagnoses, and the automated procedure for deciding whether two free-form clinical diagnoses are equivalent reaches 98% accuracy. We evaluate our approach on 360 medical dialogues, comparing multiple large language models (LLMs) against a panel of physicians. This large-scale study shows that top-performing models, such as DeepSeek-V3, achieve consistency on par with or exceeding expert consensus. Moreover, we demonstrate that expert judgments exhibit substantial variability, often greater than the disagreement between AI and human experts. This finding underscores the limitations of absolute metrics and supports the adoption of relative metrics in medical AI evaluation.
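The page does not reproduce the RPAD/RRAD formulas, so the sketch below is only one plausible reading of the abstract: the model's average precision/recall against each expert, divided by the average pairwise precision/recall among the experts themselves. The `precision_recall`, `rpad_rrad`, and `naive_match` names are illustrative, and the exact-string matcher stands in for the paper's LLM-based matching of free-text diagnoses.

```python
"""Minimal sketch of relative precision and recall (RPAD/RRAD).

Assumed reading: the model's precision/recall against each expert,
normalized by the average pairwise precision/recall among the experts
(requires at least two experts). The diagnosis matcher is pluggable;
a naive exact match stands in for the paper's LLM-based matching.
"""
from itertools import permutations
from typing import Callable, Iterable


def _mean(xs):
    return sum(xs) / len(xs)


def precision_recall(pred: Iterable[str], ref: Iterable[str],
                     same: Callable[[str, str], bool]) -> tuple[float, float]:
    """Precision/recall of predicted diagnoses against a reference set."""
    pred, ref = list(pred), list(ref)
    hit_pred = sum(any(same(p, r) for r in ref) for p in pred)  # predictions confirmed by the reference
    hit_ref = sum(any(same(r, p) for p in pred) for r in ref)   # reference diagnoses recovered by the prediction
    precision = hit_pred / len(pred) if pred else 0.0
    recall = hit_ref / len(ref) if ref else 0.0
    return precision, recall


def rpad_rrad(model_dx: list[str], expert_dx: list[list[str]],
              same: Callable[[str, str], bool]) -> tuple[float, float]:
    """Model-vs-expert precision/recall divided by the inter-expert baseline
    computed over all ordered pairs of distinct experts."""
    model_scores = [precision_recall(model_dx, e, same) for e in expert_dx]
    expert_scores = [precision_recall(a, b, same)
                     for a, b in permutations(expert_dx, 2)]
    model_p = _mean([p for p, _ in model_scores])
    model_r = _mean([r for _, r in model_scores])
    expert_p = _mean([p for p, _ in expert_scores])
    expert_r = _mean([r for _, r in expert_scores])
    return model_p / expert_p, model_r / expert_r


if __name__ == "__main__":
    experts = [["acute bronchitis", "GERD"],
               ["acute bronchitis"],
               ["GERD", "pneumonia"]]
    model = ["acute bronchitis", "GERD"]
    naive_match = lambda a, b: a.strip().lower() == b.strip().lower()  # placeholder matcher
    print(rpad_rrad(model, experts, naive_match))  # values above 1 mean closer agreement than the experts show among themselves
```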
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI medical diagnostics with reliable metrics
Addressing variability in expert judgments for AI assessment
Developing relative metrics comparing AI to multiple experts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Relative Precision and Recall metrics
Normalizing against inter-expert disagreement
Free-form diagnosis evaluation methodology (see the matching sketch below)
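
A minimal sketch of how such an LLM-judged equivalence check could be wired up follows; the prompt wording and the `ask_llm` hook are illustrative assumptions rather than the authors' implementation, which reportedly reaches 98% matching accuracy.

```python
"""Minimal sketch of LLM-based matching of free-form diagnoses.

The prompt and the `ask_llm` callable are assumptions for illustration:
an LLM judge is asked whether two free-text diagnoses denote the same
clinical condition.
"""
import re
from typing import Callable

MATCH_PROMPT = """You are a clinical adjudicator.
Diagnosis A: {a}
Diagnosis B: {b}
Do these two free-text diagnoses refer to the same clinical condition?
Answer with a single word: YES or NO."""


def diagnoses_match(a: str, b: str, ask_llm: Callable[[str], str]) -> bool:
    """Ask an LLM judge whether two free-form diagnoses are equivalent."""
    answer = ask_llm(MATCH_PROMPT.format(a=a, b=b))
    return answer.strip().upper().startswith("YES")


if __name__ == "__main__":
    # Offline stand-in for the judge model so the sketch runs without an API;
    # a real deployment would wrap a chat-completion call to the judge LLM here.
    def toy_judge(prompt: str) -> str:
        terms = [set(re.findall(r"[a-z]+", line.lower())) - {"diagnosis", "a", "b"}
                 for line in prompt.splitlines() if line.startswith("Diagnosis")]
        return "YES" if terms[0] & terms[1] else "NO"

    print(diagnoses_match("acute bronchitis", "bronchitis, acute", toy_judge))    # True
    print(diagnoses_match("GERD", "gastroesophageal reflux disease", toy_judge))  # False: the lexical stand-in misses synonyms a real LLM judge should catch
```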
🔎 Similar Papers
No similar papers found.