🤖 AI Summary
Conventional neural radiance fields (NeRFs) struggle in neurosurgical settings due to extreme scarcity of intraoperative multi-view imagery. Method: We propose the first single-image NeRF reconstruction framework tailored for neurosurgery. It integrates preoperative MRI priors to guide synthetic view generation and employs a synergistic neural style transfer pipeline—combining WTC2 and STROTSS—to achieve cross-modal translation from MRI to intraoperative appearance, thereby constructing a high-fidelity synthetic multi-view dataset for rapid, single-image-driven NeRF training. Results: Evaluated on four real neurosurgical cases, our method achieves an SSIM of 0.87±0.03 for synthesized views, with significantly improved texture fidelity over baselines and reconstruction quality approaching that of multi-view NeRFs. It enables real-time novel-view synthesis and 3D navigation. This work pioneers single-image NeRF application in surgery, overcoming the long-standing bottleneck of 3D modeling under severe intraoperative data constraints.
📝 Abstract
Purpose: Neural Radiance Fields (NeRF) offer exceptional capabilities for 3D reconstruction and view synthesis, yet their reliance on extensive multi-view data limits their application in surgical intraoperative settings where only limited data is available. In particular, collecting such extensive data intraoperatively is impractical due to time constraints. This work addresses this challenge by leveraging a single intraoperative image and preoperative data to train NeRF efficiently for surgical scenarios.
Methods: We leverage preoperative MRI data to define the set of camera viewpoints and images needed for robust and unobstructed training. Intraoperatively, the appearance of the surgical image is transferred to the pre-constructed training set through neural style transfer, specifically combining WTC2 and STROTSS to prevent over-stylization. This process enables the creation of a dataset for instant and fast single-image NeRF training.
Results: The method is evaluated with four clinical neurosurgical cases. Quantitative comparisons to NeRF models trained on real surgical microscope images demonstrate strong synthesis agreement, with similarity metrics indicating high reconstruction fidelity and stylistic alignment. When compared with ground truth, our method demonstrates high structural similarity, confirming good reconstruction quality and texture preservation.
Conclusion: Our approach demonstrates the feasibility of single-image NeRF training in surgical settings, overcoming the limitations of traditional multi-view methods.