🤖 AI Summary
AI models for otoscopic image classification of otitis media exhibit poor generalizability and limited clinical applicability due to dataset bias and inconsistent image quality.
Method: We systematically evaluated multiple public datasets and conducted two types of counterfactual experiments, integrating AUC analysis, logistic regression modeling, and quantitative HSV color-space analysis to characterize how non-clinical artifacts—particularly device-induced chromatic biases—interfere with model decision-making.
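The HSV color-space analysis described above can be sketched in miniature: compute per-channel HSV means for each image and compare them across acquisition sources. The pixel values below are fabricated stand-ins for frames from two differently tinted otoscopes, not data from the study.

```python
import colorsys

def mean_hsv(pixels):
    """Per-channel HSV means for an iterable of (r, g, b) tuples in 0-255."""
    n = 0
    h_sum = s_sum = v_sum = 0.0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        h_sum += h
        s_sum += s
        v_sum += v
        n += 1
    return h_sum / n, s_sum / n, v_sum / n

# Hypothetical device-tinted samples: one otoscope skews warm (reddish),
# another skews cool (bluish); clinical content is irrelevant here.
warm_device = [(200, 120, 100), (210, 130, 110), (190, 115, 95)]
cool_device = [(120, 140, 200), (110, 135, 210), (115, 130, 195)]

h_warm, _, _ = mean_hsv(warm_device)
h_cool, _, _ = mean_hsv(cool_device)
# A systematic hue gap between acquisition devices is exactly the kind of
# non-clinical artifact a classifier can latch onto.
print(f"warm hue={h_warm:.2f}  cool hue={h_cool:.2f}")
```

A real pipeline would aggregate these statistics per dataset and test whether they separate diagnostic classes independently of pathology.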
Contribution/Results: Significant biases were identified in the Chilean and Ohio (USA) datasets, where models over-relied on spurious color cues; in contrast, the Turkish dataset yielded more robust, clinically relevant feature dependence. Critically, models trained solely on artifact features achieved internal and external AUCs >0.87, demonstrating that data quality—not algorithmic sophistication—is the primary determinant of generalization. The study underscores that standardized imaging protocols and diverse, representative data acquisition are essential for enhancing the clinical reliability of AI in otology.
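To make the artifact-only result concrete, here is a hedged sketch of the same idea: a plain logistic regression fit on synthetic "artifact" features (e.g. mean hue and mean brightness) that correlate with the label only through the acquisition device. The features and data are invented for illustration; the study's actual features and AUCs are as reported above.

```python
import math
import random

def auc(labels, scores):
    """Rank-based AUC: probability a positive outranks a negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fit_logreg(X, y, lr=0.5, epochs=500):
    """Plain stochastic-gradient logistic regression; returns (weights, bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

random.seed(0)
# Hypothetical artifact features tied to the device, not to pathology:
# class 1 images happen to come from a warmer, darker otoscope.
X = [[random.gauss(0.6, 0.05), random.gauss(0.4, 0.05)] for _ in range(50)] \
  + [[random.gauss(0.3, 0.05), random.gauss(0.6, 0.05)] for _ in range(50)]
y = [1] * 50 + [0] * 50

w, b = fit_logreg(X, y)
scores = [1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
          for xi in X]
print(f"internal AUC on artifact-only features: {auc(y, scores):.2f}")
```

When the device confound is this clean, artifact-only AUC approaches 1.0 internally, which is precisely why high AUC alone is weak evidence of clinically meaningful learning.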
📝 Abstract
Ear disease contributes significantly to global hearing loss, and recurrent otitis media is a leading preventable cause in children, with lasting developmental impact. Artificial intelligence (AI) offers promise for early diagnosis via otoscopic image analysis, but dataset biases and inconsistencies limit model generalizability and reliability. This retrospective study systematically evaluated three public otoscopic image datasets (Chile; Ohio, USA; Türkiye) using quantitative and qualitative methods. Two counterfactual experiments were performed: (1) obscuring clinically relevant features to assess model reliance on non-clinical artifacts, and (2) evaluating the impact of hue, saturation, and value on diagnostic outcomes. Quantitative analysis revealed significant biases in the Chile and Ohio, USA datasets. Counterfactual Experiment I found high internal performance (AUC > 0.90) but poor external generalization, attributable to dataset-specific artifacts. The Türkiye dataset showed fewer biases, with AUC decreasing from 0.86 to 0.65 as masking increased, suggesting greater reliance on clinically meaningful features. Counterfactual Experiment II identified common artifacts in the Chile and Ohio, USA datasets. A logistic regression model trained only on clinically irrelevant features from the Chile dataset achieved high internal (AUC = 0.89) and external (Ohio, USA: AUC = 0.87) performance. Qualitative analysis identified redundancy in all datasets and stylistic biases in the Ohio, USA dataset that correlated with clinical outcomes. In summary, dataset biases significantly compromise the reliability and generalizability of AI-based otoscopic diagnostic models. Addressing these biases through standardized imaging protocols, diverse dataset inclusion, and improved labeling methods is crucial for developing robust AI solutions, broadening access to high-quality healthcare, and enhancing diagnostic accuracy.
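Counterfactual Experiment I's masking procedure can be sketched as progressively occluding a centered region (where the tympanic membrane typically sits) before re-scoring a model. The function below is a minimal, hypothetical version of that masking step operating on a grayscale image represented as nested lists; the study's actual implementation is not shown here.

```python
def mask_center(img, frac):
    """Zero out a centered square covering `frac` of each image dimension.

    This mimics obscuring the clinically relevant (eardrum-centered) region,
    leaving only peripheral content for a model to exploit.
    """
    h, w = len(img), len(img[0])
    mh, mw = int(h * frac), int(w * frac)
    top, left = (h - mh) // 2, (w - mw) // 2
    out = [row[:] for row in img]  # copy so the original stays intact
    for r in range(top, top + mh):
        for c in range(left, left + mw):
            out[r][c] = 0
    return out

img = [[1] * 8 for _ in range(8)]  # toy 8x8 all-ones "image"
for frac in (0.25, 0.5, 0.75):
    masked = mask_center(img, frac)
    kept = sum(map(sum, masked)) / 64
    print(f"mask frac {frac}: {kept:.2%} of pixels retained")
```

If AUC degrades smoothly as the mask grows (as reported for the Türkiye dataset), the model is plausibly using the occluded clinical region; if AUC barely moves, it is relying on whatever survives at the periphery.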