REASON: Probability map-guided dual-branch fusion framework for gastric content assessment

📅 2025-11-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low efficiency and poor accuracy of manual segmentation and empirical formulas in ultrasound-based gastric content assessment, this paper proposes a two-stage probability map-guided dual-branch fusion framework. First, a deep learning segmentation model generates an anatomically informed antral probability map to explicitly model and suppress ultrasound artifacts. Second, a dual-branch CNN extracts discriminative features from right lateral decubitus and supine-view ultrasound images, with feature fusion weighted by the probability map to enhance robustness. The method enables end-to-end aspiration risk stratification. Evaluated on a proprietary clinical dataset, it significantly outperforms existing state-of-the-art approaches, achieving a 6.2% absolute improvement in accuracy and an AUC of 0.94. This work provides a robust, accurate, and automated solution for preoperative gastric content evaluation.

📝 Abstract
Accurate assessment of gastric content from ultrasound is critical for stratifying aspiration risk at induction of general anesthesia. However, traditional methods rely on manual tracing of gastric antra and empirical formulas, which face significant limitations in both efficiency and accuracy. To address these challenges, a novel two-stage probability map-guided dual-branch fusion framework (REASON) for gastric content assessment is proposed. In stage 1, a segmentation model generates probability maps that suppress artifacts and highlight gastric anatomy. In stage 2, a dual-branch classifier fuses information from two standard views, right lateral decubitus (RLD) and supine (SUP), to improve the discrimination of learned features. Experimental results on a self-collected dataset demonstrate that the proposed framework outperforms current state-of-the-art approaches by a significant margin. This framework shows great promise for automated preoperative aspiration risk assessment, offering a more robust, efficient, and accurate solution for clinical practice.
Problem

Research questions and friction points this paper is trying to address.

Automates gastric content assessment from ultrasound to replace manual methods
Improves accuracy and efficiency in preoperative aspiration risk stratification
Fuses dual-branch classification from different ultrasound views using probability maps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage framework for gastric content assessment
Probability maps suppress artifacts and highlight anatomy
Dual-branch classifier fuses two standard ultrasound views
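The probability map-guided fusion described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual architecture: the layer sizes, the element-wise weighting of feature maps by each view's antral probability map, and the three-way output (e.g. empty / clear fluid / solid) are all assumptions for the sake of the example.

```python
import torch
import torch.nn as nn


class ProbMapFusion(nn.Module):
    """Sketch of probability map-weighted dual-branch fusion.

    Each branch encodes one standard view (RLD or SUP); its feature map is
    multiplied by the corresponding antral probability map so that artifact
    regions (low probability) contribute little to the fused representation.
    Channel counts and the classification head are hypothetical.
    """

    def __init__(self, channels: int = 32, num_classes: int = 3):
        super().__init__()
        # One small conv encoder per view; same structure, separate weights.
        self.rld_branch = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        self.sup_branch = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(2 * channels, num_classes)

    def forward(self, rld_img, sup_img, rld_prob, sup_prob):
        # Weight each branch's features by its view's probability map
        # (broadcast over the channel dimension).
        f_rld = self.rld_branch(rld_img) * rld_prob
        f_sup = self.sup_branch(sup_img) * sup_prob
        # Global average pool each branch, concatenate, then classify.
        pooled = torch.cat([f_rld.mean(dim=(2, 3)), f_sup.mean(dim=(2, 3))], dim=1)
        return self.head(pooled)


# Usage with dummy inputs: two views plus their probability maps.
model = ProbMapFusion()
rld = torch.zeros(2, 1, 64, 64)
sup = torch.zeros(2, 1, 64, 64)
prob = torch.ones(2, 1, 64, 64)
logits = model(rld, sup, prob, prob)  # shape: (2, 3)
```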
Nu-Fang Xiao
School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan, 411199, China
De-Xing Huang
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
Le-Tian Wang
Department of Anesthesiology, Huashan Hospital, Fudan University, Shanghai, 200040, China
Mei-Jiang Gui
Institute of Automation, Chinese Academy of Sciences
Surgical Robot, Tactile Perception
Qi Fu
School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan, 411199, China
Xiao-Liang Xie
Chinese Academy of Sciences
Robotic surgery
Shi-Qi Liu
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
Shuangyi Wang
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
Zeng-Guang Hou
Professor and Deputy Director, SKLMCCS, Institute of Automation, Chinese Academy of Sciences
Computational Intelligence, Robotics, Medical Robots, Intelligent Systems
Ying-Wei Wang
Department of Anesthesiology, Huashan Hospital, Fudan University, Shanghai, 200040, China
Xiao-Hu Zhou
Institute of Automation, Chinese Academy of Sciences
Medical robotics, Image analysis, Deep learning