🤖 AI Summary
Training implicit neural representations (INRs) for time-varying volumetric data suffers from slow convergence, making it too inefficient for interactive visualization. Method: We propose F-Hash encoding, a novel architecture that introduces the 4D hypercube (tesseract) structure into dynamic hash encoding, enabling conflict-free, multi-level, feature-aware hashing that achieves high encoding capacity with compact parameters. Integrated with a 4D embedded grid and an adaptive ray-marching algorithm, F-Hash unifies INR training and rendering optimization. Contribution/Results: The method natively supports diverse time-varying feature detection and spatiotemporal super-resolution reconstruction. Evaluated on multiple large-scale time-varying volume datasets, it achieves state-of-the-art training convergence speed and significantly improves rendering frame rates, enabling real-time interactive visualization.
📝 Abstract
Interactive time-varying volume visualization is challenging due to the complex spatiotemporal features and sheer size of such datasets. Recent works transform the original discrete time-varying volumetric data into continuous Implicit Neural Representations (INRs) to address compression, rendering, and super-resolution in both the spatial and temporal domains. However, training an INR takes a long time to converge, especially when handling large-scale time-varying volumetric datasets. In this work, we propose F-Hash, a novel feature-based multi-resolution Tesseract encoding architecture that greatly improves convergence speed over existing input encoding methods for modeling time-varying volumetric data. The proposed design incorporates multi-level collision-free hash functions that map dynamic 4D multi-resolution embedding grids without bucket waste, achieving high encoding capacity with compact encoding parameters. Our encoding method is agnostic to the time-varying feature detection method, making it a unified encoding solution for feature tracking and evolution visualization. Experiments show that F-Hash achieves state-of-the-art convergence speed when training on various time-varying volumetric datasets with diverse features. We also propose an adaptive ray marching algorithm that optimizes sample streaming for faster rendering of the time-varying neural representation.
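The abstract's core idea, multi-level hash encoding of a 4D (x, y, z, t) multi-resolution grid, can be illustrated with a generic sketch. Note that F-Hash's collision-free hash functions are not specified in the abstract, so the sketch below falls back to a conventional prime-multiplication spatial hash in the style of earlier multi-resolution hash encodings; all names, constants, and the nearest-vertex lookup (instead of full 16-corner 4D interpolation) are illustrative simplifications, not the paper's method.

```python
import numpy as np

# Large primes for a conventional 4D spatial hash (illustrative only;
# F-Hash replaces this with multi-level collision-free hash functions).
PRIMES = np.array([1, 2654435761, 805459861, 3674653429], dtype=np.uint64)

def hash4d(coords, table_size):
    """Map integer 4D grid coordinates into [0, table_size)."""
    coords = np.asarray(coords, dtype=np.uint64)
    return np.bitwise_xor.reduce(coords * PRIMES, axis=-1) % np.uint64(table_size)

def encode(p, tables, base_res=4, growth=2.0):
    """Concatenate per-level features for a point p in [0, 1]^4.

    tables: one (table_size, feat_dim) array per resolution level.
    Uses nearest-vertex lookup to keep the sketch short.
    """
    feats = []
    for lvl, table in enumerate(tables):
        res = int(base_res * growth ** lvl)  # grid resolution at this level
        idx = np.minimum((np.asarray(p) * res).astype(np.int64), res - 1)
        feats.append(table[hash4d(idx, table.shape[0])])
    return np.concatenate(feats)

# Four levels, 2 features each: the encoding of a 4D point is an 8-vector.
rng = np.random.default_rng(0)
tables = [rng.normal(size=(2**12, 2)).astype(np.float32) for _ in range(4)]
vec = encode([0.3, 0.7, 0.5, 0.1], tables)
print(vec.shape)  # (8,)
```

The concatenated feature vector would then be fed to a small MLP that predicts the scalar value at (x, y, z, t); the compactness claim in the abstract comes from the hash tables being much smaller than a dense 4D grid at the finer levels.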
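The adaptive ray marching algorithm itself is not detailed in the abstract. The sketch below shows only the general idea such schemes share: lengthen the step through near-empty space, refine it where density is high, and terminate a ray early once its transmittance is negligible. The density function, thresholds, and step sizes are all illustrative assumptions.

```python
import numpy as np

def adaptive_ray_march(density_fn, origin, direction, t, t_max=1.0,
                       base_step=0.01, max_step=0.08,
                       sigma_thresh=0.05, term_thresh=1e-3):
    """Accumulate absorption along one ray at time t.

    density_fn(p, t) -> scalar density; origin/direction are 3-vectors.
    Returns (absorbed, transmittance); the two sum to ~1 by construction.
    """
    transmittance, absorbed = 1.0, 0.0
    s = 0.0
    while s < t_max and transmittance > term_thresh:  # early termination
        p = origin + s * direction
        sigma = float(density_fn(p, t))
        # adapt the stride: coarse in empty space, fine in dense regions
        step = max_step if sigma < sigma_thresh else base_step
        alpha = 1.0 - np.exp(-sigma * step)
        absorbed += transmittance * alpha
        transmittance *= 1.0 - alpha
        s += step
    return absorbed, transmittance

# Toy time-varying field: a Gaussian blob whose density grows with t.
def blob(p, t):
    return 5.0 * np.exp(-20.0 * np.sum((p - 0.5) ** 2)) * (0.5 + 0.5 * t)

ray_o = np.array([0.5, 0.5, 0.0])
ray_d = np.array([0.0, 0.0, 1.0])
absorbed, T = adaptive_ray_march(blob, ray_o, ray_d, t=1.0)
print(absorbed, T)  # absorbed + transmitted should sum to ~1.0
```

In the neural setting, each `density_fn` call is an encoding lookup plus an MLP evaluation, so reducing the sample count per ray directly raises the rendering frame rate, which is the motivation the abstract gives for optimizing sample streaming.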