🤖 AI Summary
To address the low long-term prediction accuracy and high computational cost of data-driven modeling for constrained multibody systems, this paper proposes the FNODE framework. Rather than integrating trajectories end-to-end as conventional Neural ODEs do, FNODE learns the acceleration vector field under explicit supervision. It introduces a Flow-Matching mechanism to construct differentiable acceleration targets and combines the Fast Fourier Transform (FFT) with finite-difference methods for efficient, numerically stable target acceleration estimation. This design circumvents the numerical bottleneck of backpropagating through an ODE solver, significantly improving training efficiency and generalization. Evaluated on multiple standard multibody dynamics benchmarks, FNODE outperforms MBD-NODE, LSTM, and FCNN in prediction accuracy, long-term stability, and inference speed. The framework establishes a new, efficient, and physically interpretable paradigm for physics-informed neural modeling.
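The hybrid FFT/finite-difference target estimation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: spectral differentiation assumes the sampled trajectory is roughly periodic over the window, so a hypothetical `margin` parameter falls back to finite differences near the boundaries where that assumption breaks down.

```python
import numpy as np

def fft_acceleration(x, dt):
    """Estimate the second time derivative of a uniformly sampled signal
    via spectral (FFT) differentiation: multiply the spectrum by (i*w)^2.
    Accurate in the interior when the signal is near-periodic."""
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)  # angular frequencies
    return np.fft.ifft(-(omega ** 2) * np.fft.fft(x)).real

def fd_acceleration(x, dt):
    """Second-order central finite differences; crude copy at the ends."""
    a = np.empty_like(x)
    a[1:-1] = (x[2:] - 2.0 * x[1:-1] + x[:-2]) / dt ** 2
    a[0], a[-1] = a[1], a[-2]
    return a

def hybrid_acceleration(x, dt, margin=5):
    """Hypothetical hybrid scheme: FFT estimate in the interior,
    finite differences within `margin` samples of each boundary."""
    a = fft_acceleration(x, dt)
    fd = fd_acceleration(x, dt)
    a[:margin], a[-margin:] = fd[:margin], fd[-margin:]
    return a
```

For a pure sinusoid sampled over a full period, the FFT estimate recovers the analytic second derivative to near machine precision, while the FD fallback keeps boundary errors bounded on non-periodic windows.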
📝 Abstract
Data-driven modeling of constrained multibody systems faces two persistent challenges: high computational cost and limited long-term prediction accuracy. To address these issues, we introduce the Flow-Matching Neural Ordinary Differential Equation (FNODE), a framework that learns acceleration vector fields directly from trajectory data. By reformulating the training objective to supervise accelerations rather than integrated states, FNODE eliminates the need for backpropagation through an ODE solver, a key bottleneck in traditional Neural ODEs. Acceleration targets are computed efficiently using numerical differentiation techniques, including a hybrid Fast Fourier Transform (FFT) and Finite Difference (FD) scheme. We evaluate FNODE on a diverse set of benchmarks, including single and triple mass-spring-damper systems, the double pendulum, the slider-crank, and the cart-pole. Across all cases, FNODE consistently outperforms existing approaches such as the Multi-Body Dynamics Neural ODE (MBD-NODE), Long Short-Term Memory (LSTM) networks, and Fully Connected Neural Networks (FCNN), demonstrating strong accuracy, generalization, and computational efficiency.
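The core reformulation, supervising accelerations instead of integrated states, can be illustrated on a toy mass-spring-damper system. This is a sketch under simplifying assumptions (a linear model fitted by least squares stands in for the neural network; finite differences stand in for the hybrid FFT/FD scheme); note that no ODE solver appears anywhere in the fitting step.

```python
import numpy as np

# Toy mass-spring-damper: q'' = -(k/m) q - (c/m) q'
k, c, m, dt = 4.0, 0.4, 1.0, 1e-3
t = np.arange(0.0, 10.0, dt)

# Generate a reference trajectory with small semi-implicit Euler steps.
q, v = np.empty_like(t), np.empty_like(t)
q[0], v[0] = 1.0, 0.0
for i in range(len(t) - 1):
    a = -(k / m) * q[i] - (c / m) * v[i]
    v[i + 1] = v[i] + dt * a
    q[i + 1] = q[i] + dt * v[i + 1]

# FNODE-style supervision (sketch): build acceleration targets by
# numerical differentiation of the observed positions, then fit the
# vector field a = f(q, q') directly -- no backprop through a solver.
a_target = (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dt ** 2
X = np.stack([q[1:-1], v[1:-1]], axis=1)          # features: (q, q')
coeffs, *_ = np.linalg.lstsq(X, a_target, rcond=None)
# coeffs should approximate (-k/m, -c/m) = (-4.0, -0.4)
```

Once the vector field is fitted, any off-the-shelf integrator can roll out predictions at inference time; the solver is only ever called forward, never differentiated through.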