🤖 AI Summary
To address catastrophic forgetting during in situ training of implicit neural compressors for scientific simulations, this paper proposes a sketch-regularized continual learning framework. The method introduces a two-tiered memory architecture comprising a full-data cache and a lightweight sketch buffer; the sketches, theoretically grounded in the Johnson–Lindenstrauss lemma, serve as structured regularization terms that constrain the evolution of the implicit neural representations. Leveraging a hypernetwork architecture, the approach enables online compression of unstructured grids and complex geometric domains under strict memory constraints. Experiments on high-fidelity 2D/3D scientific simulation data demonstrate stable reconstruction at high compression ratios (>100×), achieving PSNR values within 0.8 dB of offline training baselines. The proposed method significantly improves the robustness and generalization of in situ training while preserving fidelity and computational efficiency.
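The core mechanism can be illustrated with a minimal sketch in NumPy, assuming a Gaussian Johnson–Lindenstrauss random projection as the sketching operator and a squared-error sketch-consistency penalty as the regularizer; the helper names (`jl_sketch`, `sketch_regularizer`) and the exact construction are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def jl_sketch(x, k, rng):
    """Project a flattened data vector x (dim d) down to k dims with a
    Gaussian JL random projection. Hypothetical helper: the 1/sqrt(k)
    scaling preserves squared norms in expectation."""
    d = x.shape[0]
    S = rng.standard_normal((k, d)) / np.sqrt(k)
    return S, S @ x

def sketch_regularizer(S, stored_sketch, x_recon):
    """Penalize drift of the current reconstruction from the stored sketch,
    ||S x_recon - stored_sketch||^2 -- a cheap stand-in for caching the
    full sample when regularizing against forgetting."""
    r = S @ x_recon - stored_sketch
    return float(r @ r)

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)           # stand-in for one simulation snapshot
S, s = jl_sketch(x, k=64, rng=rng)      # lightweight sketch-buffer entry
loss_same = sketch_regularizer(S, s, x)         # zero: reconstruction matches
loss_drift = sketch_regularizer(S, s, x + 0.1)  # grows as reconstruction drifts
```

The sketch buffer stores only `(S, s)` per retained sample (here 64 floats plus the projection seed instead of 1024 floats), which is what makes revisiting old timesteps affordable under strict in situ memory budgets.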
📝 Abstract
Focusing on implicit neural representations, we present a novel in situ training protocol that employs limited memory buffers of full and sketched data samples, where the sketched data are leveraged to prevent catastrophic forgetting. The theoretical motivation for our use of sketching as a regularizer is presented via a simple Johnson–Lindenstrauss-informed result. While our methods may be of wider interest in the field of continual learning, we specifically target in situ neural compression using implicit neural representation-based hypernetworks. We evaluate our method on a variety of complex simulation data in two and three dimensions, over long time horizons, and across unstructured grids and non-Cartesian geometries. On these tasks, we show strong reconstruction performance at high compression rates. Most importantly, we demonstrate that sketching enables the presented in situ scheme to approximately match the performance of the equivalent offline method.