AI Summary
Wide-field fluorescence microscopy suffers from poor axial resolution and strong out-of-focus background, limiting its utility in densely labeled specimens. To address this, we propose ET2dNet, a physics-guided hybrid deep network that reconstructs optical sectioning images with near-total-internal-reflection-fluorescence (TIRF) quality end to end from a single wide-field frame, without hardware modification. Leveraging EPI-TIRF cross-modal learning, ET2dNet integrates point spread function (PSF) modeling, paired supervised training, few-shot adaptation, and knowledge distillation. Extended to three dimensions as ET3dNet, it further suppresses out-of-focus artifacts. Validated on cellular and tissue samples, our method substantially reduces background, improves axial resolution by approximately twofold, and remains compatible with conventional deconvolution for enhancing lateral resolution. This work presents the first end-to-end mapping from a single wide-field frame to TIRF-level axial super-resolution, demonstrating strong generalizability and clinical applicability.
Abstract
The resolving ability of wide-field fluorescence microscopy is fundamentally limited by out-of-focus background owing to its low axial resolution, particularly for densely labeled biological samples. To address this, we developed ET2dNet, a deep learning-based EPI-TIRF cross-modality network that achieves TIRF-comparable background subtraction and axial super-resolution from a single wide-field image without requiring hardware modifications. The model employs a physics-informed hybrid architecture that combines supervised learning on registered EPI-TIRF image pairs with self-supervised physical modeling via convolution with the point spread function. This framework generalizes well across microscope objectives, enabling few-shot adaptation to new imaging setups. Rigorous validation on cellular and tissue samples confirms ET2dNet's superiority in background suppression and axial resolution enhancement, while maintaining compatibility with deconvolution techniques for lateral resolution improvement. Furthermore, by extending this paradigm through knowledge distillation, we developed ET3dNet, a dedicated three-dimensional reconstruction network that produces artifact-reduced volumetric results. ET3dNet effectively removes out-of-focus background signals even when the input image stack lacks the source of the background. This framework makes axial super-resolution imaging more accessible by providing an easy-to-deploy algorithm that avoids additional hardware cost and complexity, showing great potential for live-cell studies and clinical histopathology.
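To make the self-supervised physical-modeling term concrete, below is a minimal sketch of a PSF-consistency loss: re-blurring the network's background-suppressed output with the imaging PSF should reproduce the observed wide-field frame. The function names, the 2D Gaussian PSF, and the mean-squared-error form are illustrative assumptions, not the paper's exact formulation (the actual work would use a measured or modeled microscope PSF and train this term jointly with the supervised EPI-TIRF loss).

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    """Illustrative stand-in for the microscope PSF (assumption:
    the real system would use a measured or physically modeled PSF)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def physics_consistency_loss(reconstruction, widefield, psf):
    """Self-supervised term: the reconstruction convolved with the PSF
    should match the observed wide-field image (hypothetical MSE form)."""
    reblurred = fftconvolve(reconstruction, psf, mode="same")
    return float(np.mean((reblurred - widefield) ** 2))

# Toy sanity check: if the reconstruction is exact, the loss is ~0.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))          # stand-in for a clean sectioned image
psf = gaussian_psf()
widefield = fftconvolve(truth, psf, mode="same")  # simulated wide-field frame
loss = physics_consistency_loss(truth, widefield, psf)
print(loss)
```

In training, a differentiable version of this residual (e.g. via a fixed convolution layer) would act as a regularizer, anchoring the network's output to the image-formation physics so it generalizes across objectives with only few-shot fine-tuning.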