AI Summary
Colonoscopy videos are typically stored as a few sparse keyframes due to storage constraints, so most colonoscopy AI models, including unsupervised domain translation models (e.g., optical-to-virtual colonoscopy), are first developed on individual frames without temporal consistency. Adding temporal modeling via consecutive frames normally requires modifying the architecture and retraining from scratch, incurring prohibitive computational overhead. To address this, we propose RT-GAN (Recurrent Temporal GAN), a lightweight, plug-and-play framework that adds temporal consistency to any single-frame domain translation model without retraining it from scratch. RT-GAN uses a recurrent GAN architecture with a tunable temporal parameter, enabling controllable temporal coherence while reducing training requirements by a factor of 5 (80% fewer resources). We demonstrate substantial improvements in temporal coherence on haustral fold segmentation and realistic colonoscopy simulator video generation, and release a first-of-its-kind temporal colonoscopy dataset, along with code and pretrained models, via the Computational Endoscopy Platform (CEP).
Abstract
Fourteen million colonoscopies are performed annually in the U.S. alone. However, the videos from these colonoscopies are not saved due to storage constraints (each video from a high-definition colonoscope camera can run to tens of gigabytes). Instead, a few relevant individual frames are saved for documentation/reporting purposes, and these are the frames on which most current colonoscopy AI models are trained. When developing new unsupervised domain translation methods for colonoscopy (e.g., to translate between real optical and virtual/CT colonoscopy), it is thus typical to start with approaches that work on individual frames without temporal consistency. Once an individual-frame model has been finalized, additional contiguous frames are added with a modified deep learning architecture to train a new model from scratch for temporal consistency. This transition to temporally consistent deep learning models, however, requires significantly more computational and memory resources for training. In this paper, we present a lightweight solution with a tunable temporal parameter, RT-GAN (Recurrent Temporal GAN), for adding temporal consistency to individual-frame-based approaches that reduces training requirements by a factor of 5. We demonstrate the effectiveness of our approach on two challenging use cases in colonoscopy: haustral fold segmentation (indicative of missed surface area) and realistic colonoscopy simulator video generation. We also release a first-of-its-kind temporal dataset for colonoscopy for the above use cases. The datasets, accompanying code, and pretrained models will be made available on our Computational Endoscopy Platform GitHub (https://github.com/nadeemlab/CEP). The supplementary video is available at https://youtu.be/UMVP-uIXwWk.
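The abstract only sketches RT-GAN at a high level, so the following is a purely illustrative toy, not the authors' method: it caricatures why frame-independent translation flickers and what a tunable temporal parameter buys, by recurrently blending each translated frame with the previous output. The `fake_single_frame_model`, `lam` weight, and `flicker` metric are all hypothetical stand-ins (RT-GAN instead learns temporal behavior adversarially with a recurrent GAN).

```python
import numpy as np

def fake_single_frame_model(frame, rng):
    # Hypothetical stand-in for a pretrained single-frame translator; the
    # per-call noise mimics the frame-to-frame flicker of models that
    # process each frame independently.
    return frame + rng.normal(scale=0.1, size=frame.shape)

def temporally_smoothed(frames, single_frame_model, lam, rng):
    """Recurrently blend each translated frame with the previous output.
    lam in [0, 1] is a toy temporal-consistency weight (lam=0 recovers the
    purely frame-independent behavior). Illustrative only."""
    outputs, prev = [], None
    for frame in frames:
        y = single_frame_model(frame, rng)
        if prev is not None:
            y = (1.0 - lam) * y + lam * prev
        outputs.append(y)
        prev = y
    return outputs

def flicker(outputs):
    # Mean frame-to-frame difference: a crude temporal-consistency metric.
    return float(np.mean([np.abs(a - b).mean()
                          for a, b in zip(outputs, outputs[1:])]))

rng = np.random.default_rng(0)
frames = [np.zeros((8, 8)) for _ in range(20)]  # a static toy "video"
raw = temporally_smoothed(frames, fake_single_frame_model, 0.0, rng)
smooth = temporally_smoothed(frames, fake_single_frame_model, 0.8, rng)
print(flicker(smooth) < flicker(raw))  # recurrence reduces flicker
```

The point of the sketch is only the control knob: turning `lam` up trades per-frame independence for temporal coherence, which is the trade-off a tunable temporal parameter exposes.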