🤖 AI Summary
In optical-sectioning structured illumination microscopy (OS-SI), accelerated acquisition introduces persistent artifacts, and the absence of optically sectioned ground-truth data hinders supervised deep learning. To address this, we propose a synthetic-data-driven deep denoising framework. We generate realistic training pairs that incorporate physically accurate artifact models, bypassing the need for experimentally acquired clean ground truth and thereby removing the supervision bottleneck. Our architecture combines an asymmetric denoising autoencoder (DAE) and a U-Net: the DAE captures global structural priors, while the U-Net refines local details. Experiments demonstrate their complementary artifact-suppression capabilities, yielding substantial improvements in image sharpness and signal-to-noise ratio. The method also streamlines OS-SI post-processing. The results validate both the efficacy and generalizability of synthetic-data-driven denoising for this imaging modality.
📝 Abstract
Structured illumination (SI) enhances image resolution and contrast by projecting patterned light onto a sample. In two-phase optical-sectioning SI (OS-SI), reduced acquisition time introduces residual artifacts that conventional denoising struggles to suppress. Deep learning offers an alternative, but supervised training is limited by the lack of clean, optically sectioned ground-truth data. We investigate encoder-decoder networks for artifact reduction in two-phase OS-SI, using synthetic training pairs formed by applying real artifact fields to synthetic images. An asymmetric denoising autoencoder (DAE) and a U-Net are trained on the synthetic data, then evaluated on real OS-SI images. Both networks improve image clarity, each excelling against different artifact types. These results demonstrate that synthetic training enables supervised denoising of OS-SI images and highlight the potential of encoder-decoder networks to streamline reconstruction workflows.
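The pairing strategy described in the abstract (corrupting synthetic clean images with artifact fields so that no experimentally sectioned ground truth is needed) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the Gaussian-blob "clean" image and the sinusoidal stripe field are hypothetical stand-ins for the synthetic images and the real, measured artifact fields used in the work.

```python
import numpy as np

def synthetic_clean(rng, size=64, n_blobs=5):
    """Toy fluorescence-like image: a sum of random Gaussian blobs.
    Stands in for the paper's synthetic ground-truth images."""
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for _ in range(n_blobs):
        cy, cx = rng.uniform(0, size, 2)
        sigma = rng.uniform(1.5, 4.0)
        img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return img / img.max()

def apply_artifacts(clean, artifact_field, noise_sigma=0.02, rng=None):
    """Corrupt a clean image with an artifact field (here multiplicative
    residual stripes) plus additive noise, yielding one training input."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = clean * (1.0 + artifact_field)
    noisy += rng.normal(0.0, noise_sigma, clean.shape)
    return noisy

rng = np.random.default_rng(0)
clean = synthetic_clean(rng)
# Hypothetical artifact field: residual illumination stripes at the
# SI pattern frequency (a real field would be measured experimentally).
yy = np.arange(clean.shape[0])[:, None]
stripes = 0.2 * np.sin(2 * np.pi * yy / 8.0) * np.ones_like(clean)
noisy = apply_artifacts(clean, stripes, rng=rng)
pair = (noisy.astype(np.float32), clean.astype(np.float32))  # (input, target)
```

In a training loop, many such `(input, target)` pairs would be generated and fed to the DAE or U-Net with a standard reconstruction loss; the key point from the abstract is that the target is the synthetic clean image, so no sectioned ground truth is ever acquired.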