🤖 AI Summary
Clinical ultrasound videos suffer from low signal-to-noise ratio (SNR) and low spatial resolution, compounded by domain shifts arising from heterogeneous imaging devices and acquisition protocols, which severely limit the cross-device generalizability of pretrained models. To address this, we propose a self-supervised video super-resolution method that operates without paired data and jointly performs blind denoising and super-resolution. Our key contribution is the first video-adaptive Deep Ultrasound Prior (DUP) framework, which integrates neural implicit priors, temporal modeling, blind degradation estimation, and end-to-end joint optimization, eliminating reliance on paired training data or device-specific priors. Experiments demonstrate significant improvements over state-of-the-art methods in PSNR (+2.1 dB) and SSIM (+0.045), superior visual reconstruction quality, and higher accuracy on downstream diagnostic tasks, confirming strong clinical applicability and cross-device generalization.
📝 Abstract
Ultrasound imaging is widely used in clinical practice, yet ultrasound videos often suffer from low signal-to-noise ratio (SNR) and limited spatial resolution, posing challenges for diagnosis and analysis. Variations in equipment and acquisition settings further widen differences in data distribution and noise levels, reducing the generalizability of pretrained models. This work presents Deep Ultrasound Prior (DUP), a self-supervised ultrasound video super-resolution algorithm. DUP video-adaptively optimizes a neural network that enhances the resolution of a given ultrasound video without requiring paired training data, while simultaneously removing noise. Quantitative and visual evaluations demonstrate that DUP outperforms existing super-resolution algorithms and yields substantial improvements in downstream applications.
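The abstract gives no implementation details, but the core self-supervised idea (fit a reconstruction to the observed video itself, with no paired training data) can be illustrated with a deliberately simplified sketch: optimize a high-resolution estimate by gradient descent so that its simulated degradation matches one observed low-resolution frame. Everything here is an illustrative assumption, not the authors' method: the degradation is fixed 2×2 average pooling rather than DUP's blind degradation estimation, and the estimate is a raw pixel array rather than the output of a video-adaptive neural network prior.

```python
import numpy as np

def downsample(x):
    # Simulated degradation operator (assumption): 2x2 average pooling.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def downsample_adjoint(r):
    # Adjoint of 2x2 average pooling: spread each low-res residual
    # uniformly over its corresponding 2x2 high-res block.
    return np.repeat(np.repeat(r, 2, axis=0), 2, axis=1) / 4.0

rng = np.random.default_rng(0)
y = rng.random((8, 8))    # observed low-resolution frame (stand-in data)
x = np.zeros((16, 16))    # high-resolution estimate, optimized per video

# Self-supervised objective: 0.5 * ||downsample(x) - y||^2.
# No paired high-resolution ground truth is ever used.
for _ in range(500):
    resid = downsample(x) - y          # data-consistency residual
    x -= 1.0 * downsample_adjoint(resid)  # gradient descent step
```

In DUP the role of the pixel array `x` is played by a neural network whose weights are adapted to the given video, which supplies the implicit prior that also suppresses noise; this sketch only shows the pairing-free data-consistency loop.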