GAUDA: Generative Adaptive Uncertainty-guided Diffusion-based Augmentation for Surgical Segmentation

📅 2025-01-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address data scarcity, poor generalizability, and the limitations of conventional data augmentation—constrained by ethical considerations and annotation burdens—in surgical image segmentation, this paper proposes an uncertainty-guided generative adaptive augmentation method. Our approach jointly models image and mask representations within a semantically consistent latent space, and—novelty for the first time—couples Bayesian neural network-based epistemic uncertainty estimation with latent diffusion models (LDMs) to enable online, on-demand, compact, and semantically coherent paired sample synthesis targeted at high-uncertainty classes. Evaluated on CaDISv2 and CholecSeg8k, our method achieves average absolute IoU improvements of 1.6% and 1.5%, respectively, significantly outperforming existing generative augmentation approaches. Crucially, robust performance gains are attained using only a small number of targeted synthetic samples.

Technology Category

Application Category

📝 Abstract
Augmentation by generative modelling yields a promising alternative to the accumulation of surgical data, where ethical, organisational and regulatory aspects must be considered. Yet, the joint synthesis of (image, mask) pairs for segmentation, a major application in surgery, is rather unexplored. We propose to learn semantically comprehensive yet compact latent representations of the (image, mask) space, which we jointly model with a Latent Diffusion Model. We show that our approach can effectively synthesise unseen high-quality paired segmentation data of remarkable semantic coherence. Generative augmentation is typically applied pre-training by synthesising a fixed number of additional training samples to improve downstream task models. To enhance this approach, we further propose Generative Adaptive Uncertainty-guided Diffusion-based Augmentation (GAUDA), leveraging the epistemic uncertainty of a Bayesian downstream model for targeted online synthesis. We condition the generative model on classes with high estimated uncertainty during training to produce additional unseen samples for these classes. By adaptively utilising the generative model online, we can minimise the number of additional training samples and centre them around the currently most uncertain parts of the data distribution. GAUDA effectively improves downstream segmentation results over comparable methods by an average absolute IoU of 1.6% on CaDISv2 and 1.5% on CholecSeg8k, two prominent surgical datasets for semantic segmentation.
Problem

Research questions and friction points this paper is trying to address.

surgical image segmentation
data augmentation challenges
model generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Uncertainty-guided Diffusion
Surgical Image Segmentation
Adaptive Enhancement
🔎 Similar Papers
No similar papers found.