A Deep Learning-Driven Framework for Inhalation Injury Grading Using Bronchoscopy Images

📅 2025-05-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional inhalation injury grading systems (e.g., the Abbreviated Injury Score, AIS) rely on subjective bronchoscopic assessment and exhibit low reliability, low validity, and weak prognostic correlation. To address this, the authors propose an end-to-end deep learning framework for objective, bronchoscopy image-driven grading, using duration of mechanical ventilation as the clinical reference standard. Methodologically, they introduce an enhanced StarGAN architecture that integrates a Patch Loss and an SSIM Loss to generate high-fidelity, anatomically accurate, clinically interpretable synthetic bronchoscopic images, the first such application in this domain, substantially mitigating small-sample generalization problems. Coupled with a Swin Transformer-based classifier and Fréchet Inception Distance (FID)-based quantitative evaluation, the model achieves 77.78% classification accuracy (an 11.11% improvement) and the lowest FID (30.06) among the compared baselines. Blind expert review by burn surgeons confirms faithful preservation of bronchial anatomy and color characteristics in the generated images.
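The core of the method is a StarGAN generator trained with two auxiliary objectives on top of the usual adversarial term. The summary does not spell out the exact formulation, so the following PyTorch sketch is illustrative only: the patch-wise L1 term, the loss weights, and all helper names are assumptions, and SSIM comes from the third-party pytorch_msssim package.

```python
# Illustrative sketch of the enhanced StarGAN generator objective.
# The Patch Loss / SSIM Loss formulations and weights below are assumptions.
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # pip install pytorch-msssim

def patch_loss(fake, real, patch=32, n_patches=8):
    """Assumed patch-wise L1 term: compare random aligned crops."""
    _, _, h, w = real.shape
    total = 0.0
    for _ in range(n_patches):
        y = torch.randint(0, h - patch + 1, (1,)).item()
        x = torch.randint(0, w - patch + 1, (1,)).item()
        total = total + F.l1_loss(fake[..., y:y + patch, x:x + patch],
                                  real[..., y:y + patch, x:x + patch])
    return total / n_patches

def generator_loss(d_fake_logits, fake, real, lam_patch=10.0, lam_ssim=10.0):
    """Adversarial term plus the two auxiliary terms (weights are guesses)."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    l_ssim = 1.0 - ssim(fake, real, data_range=1.0)  # images scaled to [0, 1]
    return adv + lam_patch * patch_loss(fake, real) + lam_ssim * l_ssim
```

In this form, 1 − SSIM penalizes structural deviation from the real image while the patch term focuses the generator on local texture, which matches the paper's stated goal of preserving bronchial structure and color.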

📝 Abstract
Clinical diagnosis and grading of inhalation injuries are challenging due to the limitations of traditional methods, such as the Abbreviated Injury Score (AIS), which rely on subjective assessments and show weak correlations with clinical outcomes. This study introduces a novel deep learning-based framework for grading inhalation injuries from bronchoscopy images, using the duration of mechanical ventilation as an objective metric. To address the scarcity of medical imaging data, we propose enhanced StarGAN, a generative model that integrates Patch Loss and SSIM Loss to improve the quality and clinical relevance of synthetic images. The augmented dataset generated by enhanced StarGAN significantly improved classification performance when evaluated with a Swin Transformer, achieving an accuracy of 77.78%, an 11.11% improvement over the original dataset. Image quality was assessed using the Fréchet Inception Distance (FID), where enhanced StarGAN achieved the lowest FID of 30.06, outperforming baseline models. Burn surgeons confirmed the realism and clinical relevance of the generated images, particularly the preservation of bronchial structures and color distribution. These results highlight the potential of enhanced StarGAN in addressing data limitations and improving classification accuracy for inhalation injury grading.
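Image quality is reported with the Fréchet Inception Distance, which compares Gaussian fits to Inception-v3 features of real and generated images. A minimal NumPy/SciPy sketch of the standard formula follows; it assumes the feature matrices have already been extracted (rows are images), a step the abstract does not detail.

```python
# Standard FID between real and generated feature sets
# (Inception-v3 features assumed precomputed; rows = images).
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_r - mu_f) ** 2)
                 + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Lower is better: a smaller FID means the generated-image feature distribution sits closer to the real one, which is why the reported 30.06 indicates the enhanced StarGAN outperforms the baselines.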
Problem

Research questions and friction points this paper is trying to address.

Develop a deep learning framework for objective inhalation injury grading
Improve the quality of synthetic bronchoscopy images to offset data scarcity
Improve classification accuracy using the augmented dataset
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep learning framework for inhalation injury grading
Enhanced StarGAN with Patch Loss and SSIM Loss
Swin Transformer classifier improves accuracy (see the sketch below)
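As referenced in the last item above, classification on the augmented dataset uses a Swin Transformer. A minimal fine-tuning sketch with the timm library follows; the specific Swin variant, the number of injury grades, and all hyperparameters are assumptions, since the summary does not specify them.

```python
# Minimal fine-tuning sketch for the Swin Transformer classifier.
# Variant, grade count, and hyperparameters are illustrative assumptions.
import timm
import torch

NUM_GRADES = 4  # assumed number of inhalation-injury grades

model = timm.create_model("swin_base_patch4_window7_224",
                          pretrained=True, num_classes=NUM_GRADES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised step on a batch of 224x224 RGB bronchoscopy images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Any ImageNet-pretrained Swin variant with a 224×224 input would slot in the same way; only num_classes is tied to the grading scheme.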
Yifan Li
Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, USA

Alan W Pang
Department of Surgery, Texas Tech University Health Sciences Center, Lubbock, Texas 79410, USA

Jo Woon Chong
Sungkyunkwan University (SKKU)
Research interests: Biomedical sensors, Non-invasive health monitoring, Personal health informatics, biomedical signal