AI Summary
Medical AI systems often exhibit poor uncertainty calibration and low predictive reliability under few-shot learning scenarios. To address this, we propose Kernel-Enhanced Bayesian Monte Carlo Dropout, a novel method that integrates kernel-based modeling with the Dropout mechanism, enabling prior knowledge injection and high-fidelity Bayesian approximation without retraining. This approach significantly improves uncertainty quantification in data-scarce settings. Evaluated on multiple public medical benchmarks, it reduces Expected Calibration Error (ECE) by 37% while simultaneously improving AUROC. Even with only 100 training samples, the method maintains over 92% prediction reliability. Designed for seamless integration, it can be directly embedded into existing clinical AI pipelines as a plug-and-play module, supporting interpretable and trustworthy decision support. The key contributions include: (i) the first fusion of kernel methods with Monte Carlo Dropout for efficient Bayesian inference; (ii) zero-shot prior incorporation without architectural or training modifications; and (iii) state-of-the-art calibration and robustness in low-data regimes.
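The two standard building blocks named above, Monte Carlo Dropout and Expected Calibration Error, can be sketched in plain NumPy. The paper's kernel-enhanced component is not specified in this summary, so this is a generic sketch under assumptions: a hypothetical one-layer toy model with random weights, dropout kept active at inference, and predictions averaged over T stochastic forward passes; the spread across passes serves as the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: 4 input features -> 2 classes.
# (The paper's kernel-enhanced model is not specified here.)
W = rng.normal(size=(4, 2))
x = rng.normal(size=(4,))


def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()


def mc_dropout_predict(x, W, p=0.5, T=200):
    """Monte Carlo Dropout: keep dropout ON at test time and run T
    stochastic forward passes; the mean is the prediction and the
    standard deviation across passes quantifies uncertainty."""
    probs = []
    for _ in range(T):
        mask = rng.random(W.shape[0]) > p          # randomly drop input units
        probs.append(softmax((x * mask / (1 - p)) @ W))  # inverted-dropout scaling
    probs = np.array(probs)
    return probs.mean(axis=0), probs.std(axis=0)


def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the bin-weighted
    average of |accuracy - mean confidence| over the bins."""
    ece = 0.0
    for lo in np.linspace(0.0, 1.0, n_bins, endpoint=False):
        hi = lo + 1.0 / n_bins
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean()
                                       - confidences[in_bin].mean())
    return ece


mean, std = mc_dropout_predict(x, W)
```

A well-calibrated model drives the ECE toward zero; the 37% reduction reported above is measured against this kind of binned calibration gap.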
Abstract
AI-driven medical predictions with trustworthy confidence are essential for the responsible use of AI in healthcare. The growing capabilities of AI raise questions about its trustworthiness in clinical settings, particularly due to opaque decision-making and limited data availability. This paper proposes a novel approach to address these challenges, introducing a Bayesian Monte Carlo Dropout model with kernel modelling. Our model is designed to enhance reliability on small medical datasets, a crucial barrier to the wider adoption of AI in healthcare. It leverages existing language models for improved effectiveness and integrates seamlessly with current workflows. Extensive evaluations on public medical datasets showcase our model's superior performance across diverse tasks. We demonstrate significant improvements in reliability, even with limited data, offering a promising step towards building trust in AI-driven medical predictions and unlocking AI's potential to improve patient care.