🤖 AI Summary
In fluorescence live-cell imaging, simultaneous visualization of multiple subcellular structures is hindered by spectral overlap and the limited number of labeling channels. To address this, we propose a spatiotemporal generative adversarial network (STGAN)-based video-to-video translation method, the first application of video-to-video translation to microscopy video domain translation, which explicitly models the spatiotemporal dynamics of subcellular structures. Our approach synthesizes high-fidelity multichannel videos containing ≥5 distinct subcellular structures from a single-channel widefield fluorescence input, mitigating spectral crosstalk and circumventing conventional labeling constraints. Quantitative and qualitative evaluations demonstrate superior structural fidelity, temporal consistency, and cross-structure disentanglement compared with state-of-the-art methods. This work establishes a label-free paradigm for multiplexed, dynamic analysis of live cells.
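The summary does not include an implementation, but the core task it describes, translating a single-channel video into a multichannel one, can be sketched concretely. The following is a minimal illustration, not the paper's actual STGAN architecture: the module name `VideoTranslator`, the 3D-convolutional encoder-decoder design, and all layer sizes are assumptions chosen only to show the input/output contract.

```python
# Minimal sketch of a single-channel -> multichannel video translator.
# All architectural choices here are illustrative assumptions; the real
# STGAN generator is not specified in the summary.
import torch
import torch.nn as nn

class VideoTranslator(nn.Module):
    """Maps a single-channel fluorescence video (B, 1, T, H, W)
    to a five-channel prediction (B, 5, T, H, W), one channel per
    predicted subcellular structure."""
    def __init__(self, in_ch=1, out_ch=5, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, base, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Downsample spatially only; keep the time axis intact.
            nn.Conv3d(base, base * 2, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(base * 2, base, kernel_size=3,
                               stride=(1, 2, 2), padding=1,
                               output_padding=(0, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(base, out_ch, kernel_size=3, padding=1),
            nn.Sigmoid(),  # intensities normalized to [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: translate an 8-frame, 64x64 single-channel clip.
video = torch.rand(1, 1, 8, 64, 64)
pred = VideoTranslator()(video)
print(pred.shape)  # torch.Size([1, 5, 8, 64, 64])
```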
📝 Abstract
Although spectrally distinct fluorescent labels make it possible to visualize multiple types of subcellular structures simultaneously, a standard fluorescence microscope can only identify a few microscopic objects at once; this limit is largely imposed by the number of fluorescent labels that can be applied to the sample. To simultaneously visualize more objects, in this paper we propose a video-to-video translation approach that mimics the development process of microscopic objects. In essence, we use a microscopy video-to-video translation framework, namely the Spatial-Temporal Generative Adversarial Network (STGAN), to reveal the spatial and temporal relationships between microscopic objects, after which a microscopy video of one object can be translated into that of another object in a different domain. The experimental results confirm that the proposed STGAN is effective for microscopy video-to-video translation, mitigating the spectral conflicts caused by the limited number of fluorescent labels and allowing multiple microscopic objects to be simultaneously visualized.
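The abstract implies an adversarial setup in which generated videos must look realistic both per frame and across frames. As a rough illustration only, the sketch below pairs the hypothetical `VideoTranslator` from above with a per-frame (spatial) and an across-frame (temporal) discriminator; the discriminator designs, loss weights, and update procedure are all assumptions, not the paper's verified training method.

```python
# One illustrative GAN generator update with spatial and temporal critics.
# Every design choice here is an assumption made for the sketch.
import torch
import torch.nn as nn

def frame_disc(channels=5, base=32):
    # 2D critic applied to individual frames (spatial realism).
    return nn.Sequential(
        nn.Conv2d(channels, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(base, 1, 4, stride=2, padding=1),
    )

def video_disc(channels=5, base=32):
    # 3D critic that also sees the time axis (temporal consistency).
    return nn.Sequential(
        nn.Conv3d(channels, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv3d(base, 1, 4, stride=2, padding=1),
    )

bce = nn.BCEWithLogitsLoss()

def generator_step(G, D_frame, D_video, opt_G, x, y):
    """x: (B,1,T,H,W) input video; y: (B,5,T,H,W) multichannel target."""
    fake = G(x)
    # Fold time into the batch axis for the per-frame critic.
    b, c, t, h, w = fake.shape
    frames = fake.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
    logits_s = D_frame(frames)
    adv_s = bce(logits_s, torch.ones_like(logits_s))
    logits_t = D_video(fake)
    adv_t = bce(logits_t, torch.ones_like(logits_t))
    rec = nn.functional.l1_loss(fake, y)   # pixel-wise fidelity term
    loss = rec + 0.1 * (adv_s + adv_t)     # loss weights are assumptions
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()

# Usage (with the illustrative VideoTranslator sketched earlier):
# G = VideoTranslator()
# opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
# loss = generator_step(G, frame_disc(), video_disc(), opt_G, x, y)
```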