GRN+: A Simplified Generative Reinforcement Network for Tissue Layer Analysis in 3D Ultrasound Images for Chronic Low-back Pain

πŸ“… 2025-03-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the time-consuming, annotation-intensive manual segmentation of multi-layer soft tissues in 3D ultrasound images for chronic low back pain analysis, this paper proposes a low-annotation-dependency semi-supervised segmentation method. The approach introduces a novel Segmentation-Guided Enhancement (SGE) mechanism and a two-stage backpropagation strategy, achieving significant performance gains using only 5% labeled data, without requiring any unlabeled data during training. It integrates a ResNet-based generator with a U-Net-based segmenter, where segmentation loss gradients dynamically regulate the generation process, and dual-stage gradient updates ensure training stability. Evaluated on 69 real-world 3D ultrasound volumes, the method substantially outperforms existing semi-supervised approaches in Dice coefficient; even under full supervision, it improves Dice by 2.16% while reducing computational overhead.

πŸ“ Abstract
3D ultrasound delivers high-resolution, real-time images of soft tissues, which are essential for pain research. However, manually distinguishing various tissues for quantitative analysis is labor-intensive. To streamline this process, we developed and validated GRN+, a novel multi-model framework that automates layer segmentation with minimal annotated data. GRN+ combines a ResNet-based generator and a U-Net segmentation model. Through a method called Segmentation-guided Enhancement (SGE), the generator produces new images and matching masks under the guidance of the segmentation model, with its weights adjusted according to the segmentation loss gradient. To prevent gradient explosion and ensure stable training, a two-stage backpropagation strategy was implemented: the first stage propagates the segmentation loss through both the generator and segmentation model, while the second stage concentrates on optimizing the segmentation model alone, thereby refining mask prediction using the generated images. Tested on 69 fully annotated 3D ultrasound scans from 29 subjects with six manually labeled tissue layers, GRN+ outperformed all other semi-supervised methods in terms of the Dice coefficient using only 5% labeled data, despite not using unlabeled data for unsupervised training. Additionally, when applied to fully annotated datasets, GRN+ with SGE achieved a 2.16% higher Dice coefficient while incurring lower computational costs compared to other models. Overall, GRN+ provides accurate tissue segmentation while reducing both computational expenses and the dependency on extensive annotations, making it an effective tool for 3D ultrasound analysis in cLBP patients.
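The two-stage backpropagation described in the abstract can be illustrated with a minimal toy sketch. Scalar "networks" stand in for the ResNet generator and U-Net segmenter, and all parameter names, learning rates, and targets below are illustrative assumptions, not the paper's implementation: stage 1 propagates the segmentation loss through both models via the chain rule, while stage 2 treats the generated image as fixed data and refines the segmenter alone.

```python
# Toy sketch of a two-stage backpropagation update (assumption: scalar
# stand-ins for the ResNet generator g(z) = w_g * z and the U-Net
# segmenter s(x) = w_s * x, with a squared-error "segmentation" loss).
def two_stage_step(w_g, w_s, z, y, lr=0.01):
    # Stage 1: propagate the loss through BOTH generator and segmenter.
    x_gen = w_g * z                  # generated "image"
    err = w_s * x_gen - y            # prediction error on the generated image
    grad_ws = 2 * err * x_gen        # dL/dw_s
    grad_wg = 2 * err * w_s * z      # dL/dw_g (chain rule through the generator)
    w_g -= lr * grad_wg
    w_s -= lr * grad_ws

    # Stage 2: freeze the generator (treat its output as fixed data)
    # and optimize the segmenter alone on the generated sample.
    x_gen = w_g * z
    err = w_s * x_gen - y
    w_s -= lr * 2 * err * x_gen
    return w_g, w_s

w_g, w_s = 0.5, 0.5
for _ in range(200):
    w_g, w_s = two_stage_step(w_g, w_s, z=1.0, y=1.0)
loss = (w_s * (w_g * 1.0) - 1.0) ** 2
# After training, w_s * w_g approaches the target 1.0 and the loss is small.
```

In a real implementation the stage-2 "freeze" would correspond to detaching the generated images from the computation graph so that no gradient flows back into the generator.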
Problem

Research questions and friction points this paper is trying to address.

Automates tissue layer segmentation in 3D ultrasound images
Reduces dependency on extensive annotated data for analysis
Improves accuracy and efficiency in chronic low-back pain research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-model framework combining a ResNet generator with a U-Net segmenter
Segmentation-guided Enhancement for image generation
Two-stage backpropagation ensures stable training
Zixue Zeng
PhD student, University of Pittsburgh; AI for medicine
Xiaoyan Zhao
University of Pittsburgh, Department of Bioengineering, Pittsburgh, USA
Matthew Cartier
University of Pittsburgh, Department of Mathematics, Pittsburgh, USA
Xin Meng
University of Pittsburgh; AI and medical imaging
J. Pu
University of Pittsburgh, Department of Bioengineering, Pittsburgh, USA; University of Pittsburgh, Department of Radiology, Pittsburgh, USA; University of Pittsburgh, Department of Ophthalmology, Pittsburgh, USA