🤖 AI Summary
Fluorescence molecular tomography (FMT) suffers from poor axial (z-direction) resolution and limited quantitative accuracy under conventional iterative reconstruction, while supervised deep learning approaches rely heavily on large-scale paired training data. To address these challenges, we propose the Warm-Basis Iterative Projection Method (WB-IPM), a synergistic learning-iterative framework: a lightweight neural network generates a physically consistent warm start as the initial basis; this is refined via an embedded differentiable iterative projection module; and a directional loss function enforces only angular consistency (rather than pixel-wise fidelity) between reconstructed and ground-truth distributions, drastically reducing dependence on high-quality annotations. Theoretical analysis characterizes the coupling mechanism between learned priors and physical forward models. Extensive simulation and experimental studies demonstrate that WB-IPM achieves superior quantitative accuracy, enhanced z-resolution, and improved reconstruction stability compared to both conventional iterative methods and end-to-end learning approaches.
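The warm-start-plus-projection pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: it assumes a linearized forward model `A @ x ≈ b` (typical for FMT sensitivity matrices) and uses a plain projected gradient loop with a nonnegativity constraint; the network warm start is stood in for by an arbitrary initial vector `x0`.

```python
import numpy as np

def warm_basis_iterative_projection(A, b, x0, step=None, iters=200):
    """Warm-started projected gradient iteration for the linear model A @ x ≈ b.

    A  : (m, n) linearized FMT sensitivity matrix (toy stand-in here)
    b  : (m,) boundary fluorescence measurements
    x0 : (n,) initial guess -- e.g. the output of a lightweight network
    """
    if step is None:
        # conservative step size: 1 / ||A||_2^2 guarantees a stable descent
        step = 1.0 / (np.linalg.norm(A, 2) ** 2)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - b))  # gradient step on 0.5 * ||A x - b||^2
        x = np.maximum(x, 0.0)              # project: fluorophore concentration >= 0
    return x

# toy demonstration: recover a nonnegative source from noiseless synthetic data
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.maximum(rng.standard_normal(20), 0.0)
b = A @ x_true
x_rec = warm_basis_iterative_projection(A, b, x0=np.zeros(20), iters=2000)
err = np.linalg.norm(x_rec - x_true)
```

In the actual method the initial basis comes from a trained network rather than zeros, and the projection module is differentiable so that gradients can flow back into the network during training.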
📝 Abstract
Fluorescence Molecular Tomography (FMT) is a widely used non-invasive optical imaging technology in biomedical research. It faces significant accuracy challenges in depth reconstruction, and conventional iterative methods struggle with poor $z$-resolution even under advanced regularization. Supervised learning approaches can improve recovery accuracy but rely on large, high-quality paired training datasets that are often impractical to acquire. This naturally raises the question of how learning-based approaches can be effectively combined with iterative schemes to yield more accurate and stable algorithms. In this work, we present a novel warm-basis iterative projection method (WB-IPM) and establish its theoretical underpinnings. The method achieves significantly more accurate reconstructions than both learning-based and iterative methods. In addition, it admits a weaker loss function that depends solely on the directional component of the difference between the ground truth and the neural network output, thereby substantially reducing the training effort. These features are justified by our error analysis as well as by simulated and real-data experiments.
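One common way to enforce the kind of directional (angular) consistency described above is a cosine-similarity loss, which is invariant to the overall scale of the prediction. This is an illustrative form only; the paper's exact directional loss may differ.

```python
import numpy as np

def directional_loss(pred, target, eps=1e-12):
    """Angular-consistency loss: 1 - cos(angle between pred and target).

    Penalizes only the direction of the reconstruction, not its scale,
    so the network need not match absolute intensities during training.
    (Illustrative form; the paper's exact loss function may differ.)
    """
    p, t = np.ravel(pred), np.ravel(target)
    cos_sim = np.dot(p, t) / (np.linalg.norm(p) * np.linalg.norm(t) + eps)
    return 1.0 - cos_sim

x = np.array([1.0, 2.0, 3.0])
loss_scaled = directional_loss(2.0 * x, x)  # rescaled prediction, same direction
loss_ortho = directional_loss(np.array([0.0, 0.0, 1.0]),
                              np.array([1.0, 0.0, 0.0]))
```

Because `loss_scaled` is essentially zero while an L2 loss would not be, such a loss tolerates intensity mismatch in the network output; the embedded iterative projection step can then restore quantitatively correct magnitudes.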