🤖 AI Summary
Existing compression and visualization methods for multivariate scientific data (tens to hundreds of variables) struggle to simultaneously model inter-variable dependencies, ensure high-fidelity reconstruction, and achieve storage efficiency. To address this, we propose a unified compression framework based on shared-parameter implicit neural representations (INRs), in which a single deep network jointly learns compact, continuous mappings for all variables end to end, explicitly preserving their spatial-semantic dependencies. This work introduces the first joint implicit compression scheme for multivariate fields, enabling both high-fidelity reconstruction and high-quality differentiable rendering while significantly improving compression ratios and storage efficiency. Experimental results demonstrate state-of-the-art performance across reconstruction error, visual quality, and compression rate. Our approach establishes a new paradigm for efficient analysis and interactive visualization of large-scale multivariate scientific data.
📝 Abstract
The widespread adoption of deep neural networks has led to their increasing use in challenging scientific visualization tasks. Recent advances in building compressed data models with implicit neural representations have shown promising results for tasks such as spatiotemporal volume visualization and super-resolution. Inspired by these successes, we develop compressed neural representations for multivariate datasets containing tens to hundreds of variables. Our approach uses a single network to learn representations for all data variables simultaneously through parameter sharing, allowing us to achieve state-of-the-art data compression. Through comprehensive evaluations, we demonstrate superior performance in reconstructed data quality, rendering and visualization quality, preservation of dependency information among variables, and storage efficiency.
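To make the parameter-sharing idea concrete, the following is a minimal sketch of a shared-parameter implicit neural representation: one small MLP maps a spatial coordinate together with a per-variable embedding to a scalar value, so all variables are decoded by the same trunk weights. The layer sizes, ReLU activation, and embedding scheme here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_VARS = 8   # number of data variables sharing the network (assumption)
EMBED_DIM = 4  # per-variable embedding size (assumption)
HIDDEN = 32    # hidden width of the shared trunk (assumption)

# Per-variable embeddings: the only variable-specific parameters.
embeddings = rng.normal(size=(NUM_VARS, EMBED_DIM))

# Shared MLP weights: (3 spatial coords + embedding) -> hidden -> 1 value.
W1 = rng.normal(size=(3 + EMBED_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, 1)) * 0.1
b2 = np.zeros(1)

def predict(coords, var_ids):
    """Evaluate the shared INR at spatial coords for the given variables.

    coords:  (N, 3) array of normalized (x, y, z) positions
    var_ids: (N,) integer array selecting which variable to decode
    """
    inp = np.concatenate([coords, embeddings[var_ids]], axis=1)
    h = np.maximum(inp @ W1 + b1, 0.0)  # ReLU hidden layer
    return (h @ W2 + b2).squeeze(-1)    # one scalar per query

# Query the same spatial location for two different variables:
pts = np.tile([[0.5, 0.5, 0.5]], (2, 1))
vals = predict(pts, np.array([0, 3]))
print(vals.shape)  # (2,)
```

Because the trunk is shared, storage grows only by one small embedding per additional variable rather than one full network, which is where the compression gain over per-variable models comes from; in practice the weights would be fit by gradient descent against the sampled field values.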