🤖 AI Summary
To address the prevalent challenges in low-cost thermal imaging (low spatial resolution, fixed-pattern noise, and other spatially localized degradations), as well as the limited size and diversity of existing thermal image datasets, this paper proposes TDiff, the first patch-based diffusion framework tailored for thermal image restoration. The method trains a generative prior on small thermal patches, exploiting the local nature of these distortions, and restores full-resolution images by denoising overlapping patches and blending them with smooth spatial windowing. A single learned prior thereby supports unified denoising, super-resolution, and deblurring. Experiments on both simulated and real-world thermal images demonstrate strong results across all three tasks, supporting the method's effectiveness, generalizability, and practical applicability.
📝 Abstract
Thermal images from low-cost cameras often suffer from low resolution, fixed-pattern noise, and other localized degradations. Available datasets for thermal imaging are also limited in both size and diversity. To address these challenges, we propose a patch-based diffusion framework (TDiff) that leverages the local nature of these distortions by training on small thermal patches. In this approach, full-resolution images are restored by denoising overlapping patches and blending them using smooth spatial windowing. To our knowledge, this is the first patch-based diffusion framework that models a learned prior for thermal image restoration across multiple tasks. Experiments on denoising, super-resolution, and deblurring demonstrate strong results on both simulated and real thermal data, establishing our method as a unified restoration pipeline.
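The overlap-and-blend inference described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `restore_patch` is a hypothetical stand-in for one restoration pass of the diffusion model on a single patch, and the patch size, stride, and choice of a 2-D Hann window are assumptions for the example.

```python
import numpy as np

def restore_full_image(image, restore_patch, patch=64, stride=32):
    """Restore a full-resolution image by restoring overlapping patches
    and blending them with a smooth 2-D window, so patch seams average
    out instead of producing blocking artifacts.

    `restore_patch` is a placeholder for the per-patch restoration model
    (here it could be any function mapping a patch to a restored patch).
    """
    H, W = image.shape
    # Smooth 2-D Hann window; the small epsilon keeps border pixels,
    # where the window falls to zero, from dividing by zero.
    w1 = np.hanning(patch)
    window = np.outer(w1, w1) + 1e-8
    out = np.zeros((H, W))
    weight = np.zeros((H, W))
    ys = list(range(0, H - patch + 1, stride))
    xs = list(range(0, W - patch + 1, stride))
    # Ensure the final row/column of patches reaches the image border.
    if ys[-1] != H - patch:
        ys.append(H - patch)
    if xs[-1] != W - patch:
        xs.append(W - patch)
    for y in ys:
        for x in xs:
            p = restore_patch(image[y:y + patch, x:x + patch])
            out[y:y + patch, x:x + patch] += window * p
            weight[y:y + patch, x:x + patch] += window
    # Normalize by the accumulated window weights to blend overlaps.
    return out / weight
```

With an identity `restore_patch`, the blended output reproduces the input, which is a quick sanity check that the windowed accumulation and normalization are consistent.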