Learning Dynamics from Input-Output Data with Hamiltonian Gaussian Processes

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Modeling non-conservative dynamical systems without velocity or momentum measurements remains challenging due to the difficulty of enforcing physical consistency in learned dynamics. Method: We propose the Non-Conservative Hamiltonian Gaussian Process (NCHGP) framework, which integrates Hamiltonian structure with Gaussian processes and incorporates an energy-conservation prior. It employs full Bayesian inference to jointly estimate latent states together with GP and structural hyperparameters (e.g., damping coefficients), and leverages a low-rank approximation for scalable training and principled uncertainty quantification. Contribution/Results: To our knowledge, NCHGP is the first method enabling Hamiltonian-structured Bayesian modeling solely from input–output data, without requiring momentum observations. Experiments on nonlinear systems demonstrate substantial improvements in modeling accuracy and uncertainty calibration over state-of-the-art alternatives that rely on momentum measurements, while achieving superior robustness and data efficiency.
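The structure the summary refers to can be made concrete with Hamilton's equations extended by a damping term and an external input, i.e. dq/dt = ∂H/∂p and dp/dt = -∂H/∂q - d·∂H/∂p + u. The sketch below is illustrative only: it uses a toy spring-mass Hamiltonian and finite-difference gradients, not the paper's GP model, and all names (`hamiltonian`, `dynamics`, the parameters m, k, d) are assumptions for the example.

```python
import numpy as np

def hamiltonian(q, p, m=1.0, k=1.0):
    # Toy separable Hamiltonian: kinetic plus potential energy (spring-mass).
    return p**2 / (2 * m) + 0.5 * k * q**2

def dynamics(q, p, u=0.0, d=0.1, eps=1e-5):
    # Non-conservative Hamiltonian dynamics with damping d and input u:
    #   dq/dt =  dH/dp
    #   dp/dt = -dH/dq - d * dH/dp + u
    # Gradients via central finite differences, for illustration only.
    dH_dq = (hamiltonian(q + eps, p) - hamiltonian(q - eps, p)) / (2 * eps)
    dH_dp = (hamiltonian(q, p + eps) - hamiltonian(q, p - eps)) / (2 * eps)
    return dH_dp, -dH_dq - d * dH_dp + u

# Forward-Euler rollout: with d > 0 and u = 0, energy decays monotonically
# (up to integration error), which is the physical consistency being enforced.
q, p, dt = 1.0, 0.0, 1e-3
E0 = hamiltonian(q, p)
for _ in range(5000):
    dq, dp = dynamics(q, p)
    q, p = q + dt * dq, p + dt * dp
assert hamiltonian(q, p) < E0
```

In the learned model, the hand-coded `hamiltonian` above would be replaced by a GP, so that the uncertainty of the GP propagates through the same structured vector field.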

📝 Abstract
Embedding non-restrictive prior knowledge, such as energy conservation laws, in learning-based approaches is a key motivation for constructing physically consistent models from limited data, relevant for, e.g., model-based control. Recent work incorporates Hamiltonian dynamics into Gaussian Process (GP) regression to obtain uncertainty-quantifying models that adhere to the underlying physical principles. However, these works rely on velocity or momentum data, which is rarely available in practice. In this paper, we consider dynamics learning with non-conservative Hamiltonian GPs, and address the more realistic problem setting of learning from input-output data. We provide a fully Bayesian scheme for estimating probability densities of unknown hidden states, of GP hyperparameters, as well as of structural hyperparameters, such as damping coefficients. Considering the computational complexity of GPs, we take advantage of a reduced-rank GP approximation and leverage its properties for computationally efficient prediction and training. The proposed method is evaluated in a nonlinear simulation case study and compared to a state-of-the-art approach that relies on momentum measurements.
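The reduced-rank GP approximation mentioned in the abstract is, in the Hilbert-space style of Solin and Särkkä, a projection onto Laplacian eigenfunctions with prior weight variances taken from the kernel's spectral density, reducing training cost from O(n³) to O(nm²) for m basis functions. The following is a minimal 1-D regression sketch of that idea under assumed settings (squared-exponential kernel, domain [-L, L], toy data); it is not the paper's implementation.

```python
import numpy as np

def basis(x, j, L):
    # Dirichlet Laplacian eigenfunctions on [-L, L].
    return np.sqrt(1.0 / L) * np.sin(np.pi * j * (x + L) / (2 * L))

def spectral_density_se(w, sigma=1.0, ell=0.5):
    # Spectral density of the 1-D squared-exponential kernel.
    return sigma**2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * w)**2)

def fit_reduced_rank_gp(x, y, m=32, L=3.0, noise=0.1):
    # Posterior mean weights of the m-term approximation:
    # w = (Phi^T Phi + noise^2 * Lambda^{-1})^{-1} Phi^T y,
    # where Lambda_jj is the spectral density at the j-th eigenfrequency.
    j = np.arange(1, m + 1)
    Phi = basis(x[:, None], j[None, :], L)           # (n, m) feature matrix
    lam = spectral_density_se(np.pi * j / (2 * L))   # prior weight variances
    A = Phi.T @ Phi + noise**2 * np.diag(1.0 / lam)
    w = np.linalg.solve(A, Phi.T @ y)
    return j, w

# Toy usage: recover a smooth function from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(200)
j, w = fit_reduced_rank_gp(x, y)
x_test = np.array([0.0, 1.0])
pred = basis(x_test[:, None], j[None, :], 3.0) @ w
```

Because the approximation is linear in the weights, it also composes cleanly with state-space inference, which is what makes joint estimation of hidden states and hyperparameters computationally tractable.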
Problem

Research questions and friction points this paper is trying to address.

Learning Hamiltonian dynamics from input-output data without velocity measurements
Estimating hidden states and hyperparameters using Bayesian inference
Developing computationally efficient Gaussian Process approximation for predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Hamiltonian Gaussian Processes with input-output data
Employs Bayesian estimation for hidden states and parameters
Leverages reduced-rank GP approximation for computational efficiency
Jan-Hendrik Ewering
Leibniz Universität Hannover, 30823 Garbsen, Germany
Robin E. Herrmann
Leibniz Universität Hannover, 30823 Garbsen, Germany
Niklas Wahlström
Associate Professor, Uppsala University, Sweden
Machine learning, deep learning, sensor fusion, state estimation, filtering
Thomas B. Schön
Uppsala University, 751 05 Uppsala, Sweden
Thomas Seel
Leibniz Universität Hannover
Artificial Intelligence, Systems & Control, Digital Health, Robotics, Autonomous Systems