🤖 AI Summary
This work addresses the challenge of limited annotated data in neuron segmentation, where conventional data augmentation struggles to produce structurally diverse, realistic 3D samples. To overcome this, the authors propose a diffusion-based data augmentation framework that synthesizes high-fidelity 3D image–label pairs at the voxel level, combining a resolution-aware multi-scale conditioning mechanism with electron microscopy (EM) resolution priors. A biology-guided mask remodeling module further enhances the anatomical plausibility of the generated labels. Under low-annotation settings on the AC3 and AC4 datasets, the method substantially improves segmentation performance, achieving relative improvements of 32.1% and 30.7% in the ARAND metric, respectively.
📝 Abstract
Neuron segmentation in electron microscopy (EM) aims to reconstruct the complete neuronal connectome; however, current deep learning-based methods are limited by their reliance on large-scale training data and extensive, time-consuming manual annotations. Traditional methods augment the training set through geometric and photometric transformations; however, the generated samples remain highly correlated with the original images and lack structural diversity. To address this limitation, we propose a diffusion-based data augmentation framework capable of generating diverse and structurally plausible image-label pairs for neuron segmentation. Specifically, the framework employs a resolution-aware conditional diffusion model with multi-scale conditioning and EM resolution priors to enable voxel-level image synthesis from 3D masks. It further incorporates a biology-guided mask remodeling module that produces augmented masks with enhanced structural realism. Together, these components effectively enrich the training set and improve segmentation performance. On the AC3 and AC4 datasets under low-annotation regimes, our method improves the ARAND metric by 32.1% and 30.7%, respectively, when combined with two different post-processing methods. Our code is available at https://github.com/HeadLiuYun/NeuroDiff.
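The central mechanism, a diffusion model that generates EM volumes conditioned on 3D segmentation masks, can be illustrated with a standard DDPM forward-noising step plus channel-wise mask conditioning. This is a minimal NumPy sketch under textbook diffusion assumptions, not the authors' implementation; the schedule values and all names (`forward_diffuse`, `make_conditioned_input`) are hypothetical, and the real framework additionally injects multi-scale and resolution-prior conditioning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule and cumulative signal rates (standard DDPM; values are illustrative).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alphas_bar = np.cumprod(1.0 - betas)

def forward_diffuse(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) * x0, (1 - a_bar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise
    return x_t, noise

def make_conditioned_input(x_t, mask):
    """Condition the denoiser on the 3D label mask by channel concatenation."""
    return np.stack([x_t, mask], axis=0)  # shape (2, D, H, W)

# Toy 3D EM volume and binary neuron mask (8^3 voxels for illustration).
x0 = rng.standard_normal((8, 8, 8))
mask = (rng.random((8, 8, 8)) > 0.5).astype(np.float64)

x_t, eps = forward_diffuse(x0, t=500, rng=rng)
net_in = make_conditioned_input(x_t, mask)
print(net_in.shape)  # (2, 8, 8, 8)
```

At training time, a 3D denoising network would take `net_in` and the timestep and be optimized to predict `eps`; at sampling time, iterating the reverse process from pure noise with a (possibly remodeled) mask as the condition yields a synthetic image aligned to that mask, which is what makes the generated pairs usable as extra training data.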