AI Summary
To address the challenge of quantifying uncertainty in expert decision-making, this paper proposes a novel prior inference framework that integrates deep learning with Bayesian inversion. The method directly models expert reasoning from high-dimensional unstructured data, such as medical images and clinical text, and automatically extracts interpretable uncertainty representations. It employs CNNs, RNNs, or Transformers to encode raw decision evidence, followed by distributional fitting and Bayesian inverse problem solving to map expert behavior onto explainable probabilistic distributions. Evaluated on colorectal cancer risk assessment, the approach significantly improves both the quality and clinical interpretability of elicited prior distributions, overcoming the limitations of conventional methods that rely solely on tabular data. This work establishes a new paradigm for formalizing expert knowledge and enabling trustworthy AI-assisted diagnosis.
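The pipeline sketched above, encoding raw decision evidence and then fitting a distribution to the resulting expert judgments, can be illustrated in miniature. The snippet below is a hypothetical simplification, not the paper's implementation: `encode_evidence` stands in for a learned CNN/RNN/Transformer encoder, the expert judgments are simulated, and the distributional-fitting step is reduced to moment-matching a Beta prior to the judged probabilities.

```python
import numpy as np

def encode_evidence(raw_cases: np.ndarray) -> np.ndarray:
    # Stand-in for a learned deep encoder (CNN/RNN/Transformer):
    # squashes each case's features into a risk score in (0, 1).
    return 1.0 / (1.0 + np.exp(-raw_cases.mean(axis=1)))

def fit_beta_prior(judgments: np.ndarray) -> tuple[float, float]:
    # Moment matching: recover Beta(alpha, beta) parameters from
    # the mean and variance of the expert's probability judgments.
    m, v = judgments.mean(), judgments.var()
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

rng = np.random.default_rng(0)
raw_cases = rng.normal(size=(500, 8))    # hypothetical case features
judgments = encode_evidence(raw_cases)   # simulated expert risk scores
alpha, beta = fit_beta_prior(judgments)
print(f"Elicited prior: Beta({alpha:.2f}, {beta:.2f})")
```

In the actual framework, the encoder would be trained on records of expert decisions, and the Bayesian inverse problem would replace this closed-form moment match.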
Abstract
Recent work [14] has introduced a method for prior elicitation that uses records of expert decisions to infer a prior distribution. While this method offers a promising approach to eliciting expert uncertainty, it has only been demonstrated on tabular data, which may not fully represent the information experts actually use to make decisions. In this paper, we show how analysts can adopt a deep learning approach to apply the method proposed in [14] to the actual information experts use. We provide an overview of deep learning models that can effectively model expert decision-making to elicit distributions capturing expert uncertainty, and present an example examining the risk of colon cancer to show in detail how these models can be used.