🤖 AI Summary
Deep Image Prior (DIP) suffers from overfitting and limited reconstruction quality in ill-posed 3D inverse problems. Method: We propose Tada-DIP, an input-adaptive, unsupervised framework for 3D image reconstruction that requires no training data. It integrates input-driven network initialization, gradient-domain denoising regularization, and an early-stopping optimization strategy to enable high-fidelity single-shot reconstruction. Contribution/Results: Tada-DIP is the first method to bring input adaptivity to the 3D DIP paradigm, and it explicitly regularizes the optimization trajectory to suppress overfitting. On sparse-view CT reconstruction, it significantly outperforms existing unsupervised baselines and matches fully supervised models trained on large fully-sampled datasets, establishing an efficient, robust, training-data-free approach to ill-posed 3D inverse problems.
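The DIP-style objective underlying such methods can be written roughly as follows (the notation here is a generic sketch, not taken from the paper; Tada-DIP's exact formulation may differ):

```latex
\hat{\theta} \;=\; \arg\min_{\theta}\; \big\| A\, f_{\theta}(z) - y \big\|_2^2 \;+\; \lambda\, R\big( f_{\theta}(z) \big),
\qquad \hat{x} \;=\; f_{\hat{\theta}}(z),
```

where \(f_{\theta}\) is an untrained 3D network, \(z\) its input (adapted in an input-adaptive scheme rather than fixed random noise), \(A\) the forward operator (e.g., the sparse-view CT projector), \(y\) the measurements, and \(R\) a denoising-type regularizer. Because the over-parameterized \(f_{\theta}\) will eventually fit measurement noise, the minimization is also stopped early rather than run to convergence.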
📝 Abstract
Deep Image Prior (DIP) has recently emerged as a promising one-shot, neural-network-based image reconstruction method. However, DIP has seen limited application to 3D image reconstruction problems. In this work, we introduce Tada-DIP, a highly effective, fully 3D DIP method for solving 3D inverse problems. By combining input adaptation with denoising regularization, Tada-DIP produces high-quality 3D reconstructions while avoiding the overfitting phenomenon that is common in DIP. Experiments on sparse-view X-ray computed tomography reconstruction validate the effectiveness of the proposed method, demonstrating that Tada-DIP produces much better reconstructions than training-data-free baselines and achieves reconstruction performance on par with a supervised network trained on a large dataset of fully-sampled volumes.
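The overfitting behavior the abstract refers to, and the role of early stopping, can be illustrated with a toy experiment. This is not the paper's algorithm: an over-parameterized linear decoder stands in for the 3D CNN, a random matrix stands in for the CT projector, and all names (`A`, `W`, `s`, the held-out validation split) are assumptions made for this sketch.

```python
# Toy sketch of early stopping in an unsupervised, DIP-style fit to
# noisy linear measurements (illustrative only; not Tada-DIP itself).
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 32                                   # signal size, number of measurements
x_true = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")  # smooth signal
A = rng.standard_normal((m, n)) / np.sqrt(m)    # forward operator (CT-projector stand-in)
y = A @ x_true + 0.05 * rng.standard_normal(m)  # noisy measurements

# Hold out every fourth measurement to monitor overfitting.
train, val = np.arange(m) % 4 != 0, np.arange(m) % 4 == 0

# Over-parameterized "decoder": x = W @ s, optimized from scratch,
# standing in for the untrained network f_theta(z) of DIP.
W = rng.standard_normal((n, 2 * n)) / np.sqrt(2 * n)
s = np.zeros(2 * n)

lr, max_iters = 0.05, 2000
val_losses, iterates = [], []
for it in range(max_iters):
    r = A[train] @ (W @ s) - y[train]
    s -= lr * (W.T @ (A[train].T @ r))          # gradient step on the data-fit term
    val_losses.append(float(np.mean((A[val] @ (W @ s) - y[val]) ** 2)))
    iterates.append(W @ s)

stop_it = int(np.argmin(val_losses))            # early-stopping point
x_stop, x_end = iterates[stop_it], iterates[-1]
err_stop = np.linalg.norm(x_stop - x_true) / np.linalg.norm(x_true)
err_end = np.linalg.norm(x_end - x_true) / np.linalg.norm(x_true)
print(f"stopped at iter {stop_it}: rel. error {err_stop:.3f} vs {err_end:.3f} at the last iterate")
```

Run to the end, the over-parameterized fit interpolates the noisy training measurements; the held-out loss typically bottoms out earlier, and stopping there acts as an implicit regularizer. Tada-DIP's actual early-stopping criterion and regularization are more involved than this validation-split heuristic.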