🤖 AI Summary
This work addresses the loss of high-frequency physical detail at the earliest simulation stages under fixed storage budgets, a common failure of existing neural simulators that the authors trace to suboptimal design of the carried state. To tackle this, they propose Derived-Field Optimization (DerivOpt), a framework that elevates state representation to a first-class design axis. By analyzing how distortion differs between primitive and derived fields across the coarsen–quantize–decode pipeline, DerivOpt selects which physical fields to carry and allocates the storage budget among them. Leveraging a calibrated channel model and an analysis of the periodic incompressible Navier–Stokes equations, the framework establishes a general strategy for state-field selection. Evaluated on the full time-dependent forward subset of PDEBench, the method substantially reduces mean rollout nRMSE and shows superior fine-scale fidelity over strong baselines, with the advantage evident from the very first input step.
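To make the claimed distortion discrepancy concrete, here is a minimal NumPy sketch, not the paper's implementation: it pushes a synthetic periodic velocity field through a toy coarsen–quantize–decode channel and compares the retained-band error of the derived vorticity when the primitive velocity is carried versus when the vorticity itself is carried. The channel construction, the synthetic field, and the helper names (`channel`, `curl`, `band_error`) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128                                    # grid points per side on the periodic domain [0, 2*pi)^2
k1d = np.fft.fftfreq(n) * n                # integer wavenumbers
KX, KY = np.meshgrid(k1d, k1d, indexing="ij")
K = np.sqrt(KX**2 + KY**2)

def random_field(decay=2.0):
    """Synthetic smooth periodic field with a power-law spectrum."""
    F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    f = np.fft.ifft2(F * (1.0 + K) ** (-decay)).real
    return f / np.abs(f).max()

def channel(f, keep=24, bits=6):
    """Toy coarsen-quantize-decode channel: band-limit to |k| <= keep,
    then uniformly quantize the retained field to 2**bits levels."""
    F = np.where(K <= keep, np.fft.fft2(f), 0.0)   # coarsen: drop out-of-budget modes
    g = np.fft.ifft2(F).real
    lo, hi, levels = g.min(), g.max(), 2**bits - 1
    q = np.round((g - lo) / (hi - lo) * levels)    # quantize
    return q / levels * (hi - lo) + lo             # decode back to physical units

def curl(u, v):
    """Spectral vorticity w = dv/dx - du/dy on the periodic grid."""
    return np.fft.ifft2(1j * (KX * np.fft.fft2(v) - KY * np.fft.fft2(u))).real

def band_error(ref, approx, keep=24):
    """Relative L2 error restricted to the retained band |k| <= keep."""
    R, A = np.fft.fft2(ref), np.fft.fft2(approx)
    m = K <= keep
    return np.linalg.norm(R[m] - A[m]) / np.linalg.norm(R[m])

u, v = random_field(), random_field()
w = curl(u, v)                                     # reference derived field (vorticity)
# Option A: carry the primitive fields, derive vorticity after decoding.
err_primitive = band_error(w, curl(channel(u), channel(v)))
# Option B: carry the derived field itself through the same channel.
err_derived = band_error(w, channel(w))
print(f"carry (u, v): {err_primitive:.3e}   carry w: {err_derived:.3e}")
```

The two printed errors generally differ because the curl amplifies the channel's quantization noise by a factor of |k| inside the retained band; this field-dependent distortion is exactly the kind of discrepancy that a storage-allocation strategy over carried fields can exploit.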
📝 Abstract
Fine-scale-faithful neural simulation under fixed storage budgets remains challenging. Many existing methods reduce high-frequency error by improving architectures, training objectives, or rollout strategies. However, under budgeted coarsen–quantize–decode pipelines, fine detail can already be lost when the carried state is constructed. In the canonical periodic incompressible Navier–Stokes setting, we show that primitive and derived fields undergo systematically different retained-band distortions under the same operator. Motivated by this observation, we formulate Derived-Field Optimization (DerivOpt), a general state-design framework that chooses which physical fields are carried and how the storage budget is allocated across them under a calibrated channel model. Across the full time-dependent forward subset of PDEBench, DerivOpt not only improves pooled mean rollout nRMSE but also delivers a decisive advantage in fine-scale fidelity over a broad set of strong baselines. More importantly, the gains are already visible at input time, before rollout learning begins. This indicates that the carried state is often the dominant bottleneck under tight storage budgets. These results suggest a broader conclusion: in budgeted neural simulation, carried-state design should be treated as a first-class design axis alongside architecture, loss, and rollout strategy.
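As a hedged illustration of why primitive and derived fields distort differently under the same operator (a standard Fourier-space identity on the 2-D periodic torus, not necessarily the paper's own derivation): the vorticity of a velocity field $(u, v)$ satisfies

$$
\hat{\omega}(\mathbf{k}) = i\bigl(k_x \hat{v}(\mathbf{k}) - k_y \hat{u}(\mathbf{k})\bigr),
$$

so if the channel adds quantization noise $(\hat{\eta}_u, \hat{\eta}_v)$ to a carried velocity, the induced vorticity error at a retained mode is

$$
\hat{\varepsilon}_{\omega}(\mathbf{k}) = i\bigl(k_x \hat{\eta}_v - k_y \hat{\eta}_u\bigr),
\qquad
\lvert\hat{\varepsilon}_{\omega}\rvert \le \lvert\mathbf{k}\rvert\,\sqrt{\lvert\hat{\eta}_u\rvert^2 + \lvert\hat{\eta}_v\rvert^2}.
$$

Velocity noise is thus amplified in the derived vorticity by up to the retained cutoff wavenumber, whereas carrying $\omega$ directly incurs noise set by its own quantization step: the same operator, systematically different retained-band distortions.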