🤖 AI Summary
To address the low efficiency and poor accuracy of manual segmentation and empirical formulas in ultrasound-based gastric content assessment, this paper proposes a two-stage probability map-guided dual-branch fusion framework. First, a deep learning segmentation model generates an anatomically informed antral probability map that explicitly models and suppresses ultrasound artifacts. Second, a dual-branch CNN extracts discriminative features from right-lateral-decubitus and supine-view ultrasound images, with feature fusion weighted by the probability map to enhance robustness. The method enables end-to-end aspiration risk stratification. Evaluated on a self-collected clinical dataset, it significantly outperforms existing state-of-the-art approaches, achieving a 6.2% absolute improvement in accuracy and an AUC of 0.94. This work provides a robust, accurate, and automated solution for preoperative gastric content evaluation.
📝 Abstract
Accurate assessment of gastric content from ultrasound is critical for stratifying aspiration risk at induction of general anesthesia. However, traditional methods rely on manual tracing of the gastric antrum and empirical formulas, which face significant limitations in both efficiency and accuracy. To address these challenges, a novel two-stage probability map-guided dual-branch fusion framework (REASON) for gastric content assessment is proposed. In stage 1, a segmentation model generates probability maps that suppress artifacts and highlight gastric anatomy. In stage 2, a dual-branch classifier fuses information from two standard views, right lateral decubitus (RLD) and supine (SUP), to improve the discrimination of learned features. Experimental results on a self-collected dataset demonstrate that the proposed framework outperforms current state-of-the-art approaches by a significant margin. This framework shows great promise for automated preoperative aspiration risk assessment, offering a more robust, efficient, and accurate solution for clinical practice.
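The core idea of stage 2, weighting each view's features by the confidence of its stage-1 probability map before fusion, can be illustrated with a minimal sketch. This is not the paper's implementation: the feature dimension (128), map resolution (64×64), and the mean-confidence weighting scheme are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-view feature vectors from the two CNN branches.
feat_rld = rng.standard_normal(128)   # right lateral decubitus (RLD) branch
feat_sup = rng.standard_normal(128)   # supine (SUP) branch

# Hypothetical antral probability maps from the stage-1 segmentation
# model, with per-pixel confidences in [0, 1].
pmap_rld = rng.uniform(0.0, 1.0, (64, 64))
pmap_sup = rng.uniform(0.0, 1.0, (64, 64))

def prob_weighted_fusion(feat_a, feat_b, pmap_a, pmap_b):
    """Weight each view's features by the mean confidence of its
    probability map, then return the normalized weighted sum
    (an assumed, simplified fusion rule)."""
    w_a, w_b = pmap_a.mean(), pmap_b.mean()
    return (w_a * feat_a + w_b * feat_b) / (w_a + w_b)

fused = prob_weighted_fusion(feat_rld, feat_sup, pmap_rld, pmap_sup)
print(fused.shape)  # (128,)
```

The fused vector would then feed the downstream risk classifier; a view whose probability map shows little confident antral anatomy contributes proportionally less to the fused representation.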