AI Summary
This work addresses the instability of conventional machine learning–based reduced-order models for stiff dynamical systems under explicit integration, a challenge exacerbated by the high computational cost and low training efficiency of implicit methods. The authors propose Trajectory-Optimized Time Reparameterization (TOTR), which formulates time remapping as an arc-length coordinate optimization problem aimed at maximizing trajectory smoothness. By minimizing the acceleration of the reparameterized trajectory, TOTR substantially enhances its learnability while enabling efficient explicit integration. Evaluated on three classes of stiff systems, the method achieves training losses one to two orders of magnitude lower than existing benchmarks and significantly improves prediction accuracy in physical time.
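The core idea of the summary, choosing a traversal-speed profile so that the reparameterized trajectory has minimal acceleration, can be illustrated with a minimal sketch. This is illustrative only: it uses constant-speed arc-length resampling of a monotone 1D curve (which drives a discrete acceleration penalty to numerical zero), not the authors' full TOTR optimization; the names `accel_penalty` and the `tanh` test trajectory are invented for the example.

```python
import numpy as np

def accel_penalty(x):
    """Discrete acceleration penalty: mean squared second difference
    of the trajectory with respect to its (uniform) sample index."""
    d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]
    return float(np.mean(d2**2))

# A stiff-looking 1D trajectory: a fast transient at t = 0.5,
# sampled uniformly in physical time t.
t = np.linspace(0.0, 1.0, 401)
x = np.tanh(50.0 * (t - 0.5))

# Reparameterize: resample the same curve at points uniformly spaced
# in cumulative arc length s, which equalizes the traversal speed.
# (For this monotone curve, constant speed makes the resampled values
# evenly spaced, so the acceleration penalty drops to round-off level.)
s = np.concatenate([[0.0], np.cumsum(np.abs(np.diff(x)))])
s_uniform = np.linspace(0.0, s[-1], len(s))
x_arc = np.interp(s_uniform, s, x)

print(f"penalty, uniform physical time: {accel_penalty(x):.3e}")
print(f"penalty, uniform arc length:    {accel_penalty(x_arc):.3e}")
```

The same trajectory becomes far smoother, and hence easier for a network to fit, once the fast transient no longer appears as a sharp feature in the training coordinate.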
Abstract
Stiff dynamical systems present a challenge for machine-learning reduced-order models (ML-ROMs), as explicit time integration becomes unstable in stiff regimes while implicit integration within learning loops is computationally expensive and often degrades training efficiency. Time reparameterization (TR) offers an alternative by transforming the independent variable so that rapid physical-time transients are spread over a stretched-time coordinate, enabling stable explicit integration on uniformly sampled grids. Although several TR strategies have been proposed, their effect on learnability in ML-ROMs remains incompletely understood. This work investigates time reparameterization as a stiffness-mitigation mechanism for neural ODE reduced-order modeling and introduces a trajectory-optimized TR (TOTR) formulation. The proposed approach casts time reparameterization as an optimization problem in arc-length coordinates, in which a traversal-speed profile is selected to penalize acceleration in stretched time. By targeting the smoothness of the training dynamics, this formulation produces reparameterized trajectories that are better conditioned and easier to learn than those of existing TR methods. TOTR is evaluated on three stiff problems: a parameterized stiff linear system, the van der Pol oscillator, and the HIRES chemical kinetics model. Across all cases, the proposed approach yields smoother reparameterizations and more accurate physical-time predictions than other TR approaches under identical training regimens. Quantitative results demonstrate loss reductions of one to two orders of magnitude compared to benchmark algorithms. These results highlight that effective stiffness mitigation in ML-ROMs depends critically on the regularity and learnability of the time map itself, and that optimization-based TR provides a robust framework for explicit reduced-order modeling of multiscale dynamical systems.
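Why stretched-time coordinates enable stable explicit integration can be sketched on the van der Pol oscillator mentioned in the abstract. This is a minimal sketch assuming the plain arc-length reparameterization ds = ||f(x)|| dt (a standard TR baseline, not the optimized TOTR speed profile); the function names and step sizes are chosen for illustration.

```python
import numpy as np

def vdp(x, mu=1000.0):
    """Van der Pol vector field; large mu makes the system stiff."""
    return np.array([x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]])

def step_physical(x, dt, mu=1000.0):
    """Explicit Euler step in physical time t."""
    return x + dt * vdp(x, mu)

def step_arclength(x, ds, mu=1000.0):
    """Explicit Euler step in the arc-length coordinate s, where
    ds = ||f(x)|| dt. The reparameterized field f(x)/||f(x)|| has
    unit norm, so a uniform grid in s takes small physical-time
    steps through fast transients automatically."""
    f = vdp(x, mu)
    return x + ds * f / np.linalg.norm(f)

# Integrate both ways from the same initial condition on uniform grids.
x_t = np.array([2.0, 0.0])
x_s = x_t.copy()
with np.errstate(over="ignore", invalid="ignore"):
    for _ in range(1000):
        x_t = step_physical(x_t, dt=1e-2)   # step too large for the stiff regime
        x_s = step_arclength(x_s, ds=1e-2)  # uniform grid in stretched time

print("physical-time Euler stays finite:", bool(np.all(np.isfinite(x_t))))
print("arc-length Euler stays finite:   ", bool(np.all(np.isfinite(x_s))))
```

The physical-time iterate diverges, while the stretched-time iterate remains bounded on the same uniform grid, which is the property that makes explicit integration viable inside an ML-ROM training loop.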