Towards reliable use of artificial intelligence to classify otitis media using otoscopic images: Addressing bias and improving data quality

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
AI models for otoscopic image classification of otitis media exhibit poor generalizability and limited clinical applicability due to dataset bias and inconsistent image quality. Method: We systematically evaluated multiple public datasets and conducted two types of counterfactual experiments, integrating AUC analysis, logistic regression modeling, and quantitative HSV color-space analysis to characterize how non-clinical artifacts—particularly device-induced chromatic biases—interfere with model decision-making. Contribution/Results: Significant biases were identified in the Chilean and Ohio (USA) datasets, where models over-relied on spurious color cues; in contrast, the Turkish dataset yielded more robust, clinically relevant feature dependence. Critically, models trained solely on artifact features achieved internal and external AUCs >0.87, demonstrating that data quality—not algorithmic sophistication—is the primary determinant of generalization. The study underscores that standardized imaging protocols and diverse, representative data acquisition are essential for enhancing the clinical reliability of AI in otology.

📝 Abstract
Ear disease contributes significantly to global hearing loss, with recurrent otitis media being a primary preventable cause in children, impacting development. Artificial intelligence (AI) offers promise for early diagnosis via otoscopic image analysis, but dataset biases and inconsistencies limit model generalizability and reliability. This retrospective study systematically evaluated three public otoscopic image datasets (Chile; Ohio, USA; Türkiye) using quantitative and qualitative methods. Two counterfactual experiments were performed: (1) obscuring clinically relevant features to assess model reliance on non-clinical artifacts, and (2) evaluating the impact of hue, saturation, and value on diagnostic outcomes. Quantitative analysis revealed significant biases in the Chile and Ohio, USA datasets. Counterfactual Experiment I found high internal performance (AUC > 0.90) but poor external generalization, owing to reliance on dataset-specific artifacts. The Türkiye dataset had fewer biases, with AUC decreasing from 0.86 to 0.65 as masking increased, suggesting greater reliance on clinically meaningful features. Counterfactual Experiment II identified common artifacts in the Chile and Ohio, USA datasets. A logistic regression model trained on clinically irrelevant features from the Chile dataset achieved high internal (AUC = 0.89) and external (Ohio, USA: AUC = 0.87) performance. Qualitative analysis identified redundancy across all datasets and stylistic biases in the Ohio, USA dataset that correlated with clinical outcomes. In summary, dataset biases significantly compromise the reliability and generalizability of AI-based otoscopic diagnostic models. Addressing these biases through standardized imaging protocols, diverse dataset inclusion, and improved labeling methods is crucial for developing robust AI solutions, expanding access to high-quality healthcare, and enhancing diagnostic accuracy.
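The artifact-only probe described above (a logistic regression on clinically irrelevant color features reaching AUC > 0.87) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the synthetic "otoscopic" frames, the whole-frame mean-HSV features, and the simulated device tint are all assumptions introduced for the example.

```python
import colorsys
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def hsv_features(image_rgb):
    # Mean HSV of the whole frame: a purely non-clinical "artifact" feature.
    r, g, b = image_rgb.reshape(-1, 3).mean(axis=0)
    return np.array(colorsys.rgb_to_hsv(r, g, b))

def fake_image(red_shift):
    # Synthetic stand-in for an otoscopic frame; `red_shift` mimics a
    # device-specific chromatic tint (hypothetical, not real data).
    img = rng.uniform(0.2, 0.8, size=(32, 32, 3))
    img[..., 0] = np.clip(img[..., 0] + red_shift, 0.0, 1.0)
    return img

# The label correlates with the device tint, not pathology: exactly the
# spurious shortcut the paper's probe is designed to expose.
X = np.array([hsv_features(fake_image(s)) for s in [0.0] * 100 + [0.15] * 100])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC from colour artifacts alone: {auc:.2f}")
```

A high AUC here is the warning sign: the classifier never saw a clinical feature, so any apparent diagnostic skill comes entirely from acquisition artifacts.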
Problem

Research questions and friction points this paper is trying to address.

Addressing dataset biases in AI-based otitis media diagnosis
Improving generalizability of otoscopic image classification models
Enhancing reliability through standardized imaging and diverse datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Counterfactual experiments to assess model biases
Quantitative and qualitative dataset bias analysis
Standardized imaging protocols for reliable AI
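The masking side of the counterfactual experiments can be illustrated with a minimal sketch. The occlusion geometry (a central square) and the mask fractions are assumptions for this example; the paper's point is that re-scoring a trained model on progressively masked frames reveals whether its AUC tracks the loss of clinical content (as in the Türkiye dataset, 0.86 falling to 0.65) or stays high on peripheral artifacts.

```python
import numpy as np

def mask_center(image, fraction):
    # Occlude a central square covering `fraction` of each dimension.
    # This stands in for hiding the tympanic membrane while leaving
    # peripheral, non-clinical artifacts visible (geometry is assumed).
    out = image.copy()
    h, w = image.shape[:2]
    dh, dw = int(h * fraction / 2), int(w * fraction / 2)
    cy, cx = h // 2, w // 2
    out[cy - dh:cy + dh, cx - dw:cx + dw] = 0.0
    return out

frame = np.ones((64, 64, 3))  # placeholder for an otoscopic image
for frac in (0.25, 0.50, 0.75):
    visible = mask_center(frame, frac).mean()
    print(f"mask fraction {frac:.2f}: visible pixel share {visible:.2f}")
```

Feeding such masked frames back through a trained classifier and plotting AUC against the mask fraction gives the degradation curve the paper uses to separate clinically grounded models from artifact-dependent ones.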
Yixi Xu
AI for Good Lab, Microsoft, Redmond, Washington, USA
Al-Rahim Habib
Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia; Department of Otolaryngology – Head and Neck Surgery, Queensland Children’s Hospital, South Brisbane, Queensland, Australia
Graeme Crossland
Department of Otolaryngology – Head and Neck Surgery, Royal Darwin Hospital, Tiwi, Northern Territory, Australia
Hemi Patel
Department of Otolaryngology – Head and Neck Surgery, Royal Darwin Hospital, Tiwi, Northern Territory, Australia
Chris Perry
University of Queensland Medical School, Brisbane, Queensland, Australia
Kris Bock
Azure FastTrack Engineering, Microsoft, Brisbane, Queensland, Australia
Tony Lian
Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
William B. Weeks
AI for Good Lab, Microsoft, Redmond, Washington, USA
Rahul Dodhia
Deputy Director, AI for Good Research Lab, Microsoft
Generative AI, artificial intelligence, statistics, computer vision, geospatial imagery
Juan Lavista Ferres
AI for Good Lab, Microsoft, Redmond, Washington, USA
Narinder Pal Singh
Associate Professor & Chief of Otolaryngology, Head & Neck Surgery, Westmead Hospital / University of Sydney
Rhinology, anterior skull base surgery, OSA, AI (artificial intelligence), CFD