🤖 AI Summary
To address the substantial I/O and storage overhead of volume visualization for large-scale scientific simulations—and the limitations of conventional image-based archiving (e.g., fixed viewpoints, transfer functions, and parameters)—this paper proposes ViSNeRF, the first multi-dimensional neural radiance field method tailored for scientific visualization. It models timesteps, transfer functions, isosurfaces, and simulation parameters as continuous implicit variables, enabling 3D geometry-aware joint generalization under sparse image supervision. Built on an enhanced NeRF architecture, ViSNeRF integrates multi-dimensional coordinate embedding, dynamic scene decomposition, and differentiable transfer function modeling. Evaluated on multiple real-world datasets, it achieves superior performance while using fewer than 10% of the training images required by state-of-the-art methods, with significant gains in PSNR and SSIM. Moreover, it supports real-time interactive parameter exploration and high-fidelity novel-view synthesis.
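To give a rough intuition for the "multi-dimensional" idea described above, the sketch below shows a toy radiance-field network whose input is a 3D position concatenated with extra scene coordinates (e.g., a timestep or simulation parameter), so a single network covers a family of scenes. This is an illustrative assumption, not ViSNeRF's actual architecture; the layer sizes, positional-encoding depth, and all function names here are invented for the example.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Map each coordinate to [sin(2^k * pi * x), cos(2^k * pi * x)] features."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    angles = x[..., None] * freqs                         # (..., dim, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)                 # (..., dim * 2 * num_freqs)

class ConditionedRadianceField:
    """Toy MLP: (x, y, z) plus P extra scene coordinates -> (rgb, density)."""

    def __init__(self, num_params=1, hidden=64, num_freqs=4, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = (3 + num_params) * 2 * num_freqs         # encoded (x, y, z, params)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 4))       # outputs (r, g, b, sigma)
        self.num_freqs = num_freqs

    def __call__(self, xyz, params):
        """xyz: (N, 3) positions; params: (N, P) scene coordinates (e.g., timestep)."""
        feat = positional_encoding(np.concatenate([xyz, params], axis=-1),
                                   self.num_freqs)
        h = np.maximum(feat @ self.w1, 0.0)               # ReLU hidden layer
        out = h @ self.w2
        rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))           # sigmoid -> color in [0, 1]
        sigma = np.maximum(out[:, 3:], 0.0)               # nonnegative density
        return rgb, sigma

# Query the same network at one set of positions but two different "scene"
# coordinates (e.g., two timesteps) to get two different radiance fields.
field = ConditionedRadianceField(num_params=1)
pts = np.random.default_rng(1).uniform(-1, 1, (8, 3))
rgb_t0, sigma_t0 = field(pts, np.full((8, 1), 0.0))
rgb_t1, sigma_t1 = field(pts, np.full((8, 1), 1.0))
```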
📝 Abstract
Domain scientists often face I/O and storage challenges when keeping raw data from large-scale simulations. Saving visualization images, albeit practical, is limited to preselected viewpoints, transfer functions, and simulation parameters. Recent advances in scientific visualization leverage deep learning techniques for visualization synthesis by offering effective ways to infer unseen visualizations when only image samples are given during training. However, due to the lack of 3D geometry awareness, existing methods typically require many training images and significant learning time to generate novel visualizations faithfully. To address these limitations, we propose ViSNeRF, a novel 3D-aware approach for visualization synthesis using neural radiance fields. Leveraging a multidimensional radiance field representation, ViSNeRF efficiently reconstructs visualizations of dynamic volumetric scenes from a sparse set of labeled image samples with flexible parameter exploration over transfer functions, isovalues, timesteps, or simulation parameters. Through qualitative and quantitative comparative evaluation, we demonstrate ViSNeRF's superior performance over several representative baseline methods, positioning it as the state-of-the-art solution. The code is available at https://github.com/JCBreath/ViSNeRF.