🤖 AI Summary
Thermal imaging offers superior visibility in dark and adverse conditions, yet its low dynamic range and inter-frame photometric instability often lead to floater artifacts in novel view synthesis. This work presents the first systematic analysis of the core challenges inherent in using pure thermal imagery for novel view synthesis and introduces a general-purpose solution that requires neither RGB guidance nor dataset-specific hyperparameter tuning. By integrating lightweight dynamic range expansion and photometric stability preprocessing with 3D Gaussian splatting rendering, the proposed method significantly enhances image quality and temporal consistency. It achieves state-of-the-art performance across multiple pure thermal imaging benchmarks, substantially outperforming existing approaches.
📝 Abstract
Thermal cameras provide reliable visibility in darkness and adverse conditions, but thermal imagery remains significantly harder to use for novel view synthesis (NVS) than visible-light images. This difficulty stems primarily from two characteristics of affordable thermal sensors. First, thermal images have extremely low dynamic range, which weakens appearance cues and limits the gradients available for optimization. Second, thermal data exhibit rapid frame-to-frame photometric fluctuations together with slow radiometric drift, both of which destabilize correspondence estimation and create high-frequency floater artifacts during view synthesis, particularly when no RGB guidance (beyond camera pose) is available. Guided by these observations, we introduce a lightweight preprocessing and splatting pipeline that expands usable dynamic range and stabilizes per-frame photometry. Our approach achieves state-of-the-art performance across thermal-only NVS benchmarks, without requiring any dataset-specific tuning.
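The abstract does not specify the exact preprocessing operators, but the two stated goals — expanding usable dynamic range and suppressing frame-to-frame gain/offset fluctuations — can be illustrated with a minimal sketch. The percentile-based contrast stretch and the per-frame affine photometric fit below are common, generic choices assumed for illustration, not the paper's actual method; all function names are hypothetical.

```python
import numpy as np

def expand_dynamic_range(frame, lo_pct=1.0, hi_pct=99.0):
    """Percentile-based contrast stretch: map the central intensity mass
    of a low-dynamic-range thermal frame onto [0, 1], strengthening the
    appearance gradients available to the splatting optimizer."""
    lo, hi = np.percentile(frame, [lo_pct, hi_pct])
    stretched = (frame.astype(np.float64) - lo) / max(hi - lo, 1e-8)
    return np.clip(stretched, 0.0, 1.0)

def stabilize_photometry(frame, reference):
    """Fit a per-frame affine model a*I + b (least squares) aligning the
    frame's photometry to a reference frame, suppressing rapid
    frame-to-frame fluctuations and slow radiometric drift."""
    x = frame.ravel().astype(np.float64)
    y = reference.ravel().astype(np.float64)
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a * frame + b
```

In a pipeline of this shape, each incoming frame would be photometrically aligned to a fixed reference (e.g. the first frame) and then range-expanded before being fed to 3D Gaussian splatting, so that training photometric loss reflects scene structure rather than sensor drift.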