OGF: An Online Gradient Flow Method for Optimizing the Statistical Steady-State Time Averages of Unsteady Turbulent Flows

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Gradient-based optimization of stationary statistical quantities (e.g., time-averaged physical fields) in chaotic turbulent flows has remained intractable: conventional adjoint methods suffer from exponential gradient divergence due to chaotic sensitivity and scale poorly to high-resolution simulations. Method: a scalable online gradient flow (OGF) method that integrates real-time forward propagation, finite-difference gradient estimation, and online parameter updates, thereby circumventing the instability inherent in reverse-mode differentiation. Contribution/Results: the method enables efficient gradient-based optimization of stationary statistics for high-dimensional chaotic systems, demonstrated on the Lorenz-63 equation, the Kuramoto–Sivashinsky equation, and compressible Navier–Stokes simulations. Experiments show a several-orders-of-magnitude reduction in the objective loss and accurate recovery of the optimal parameters, yielding a numerically stable and computationally scalable optimization framework for engineering applications such as geometric design and flow control.

📝 Abstract
Turbulent flows are chaotic and unsteady, but their statistical distribution converges to a statistical steady state. Engineering quantities of interest typically take the form of time-average statistics such as $\frac{1}{t} \int_0^t f( u(x,\tau; \theta) )\, d\tau \overset{t \rightarrow \infty}{\rightarrow} F(x; \theta)$, where $u(x,t; \theta)$ are solutions of the Navier--Stokes equations with parameters $\theta$. Optimizing over $F(x; \theta)$ has many engineering applications including geometric optimization, flow control, and closure modeling. However, this remains an open challenge, as existing computational approaches are incapable of scaling to physically representative numbers of grid points. The fundamental obstacle is the chaoticity of turbulent flows: gradients calculated with the adjoint method diverge exponentially as $t \rightarrow \infty$. We develop a new online gradient-flow (OGF) method that is scalable to large degree-of-freedom systems and enables optimizing for the steady-state statistics of chaotic, unsteady, turbulence-resolving simulations. The method forward-propagates an online estimate for the gradient of $F(x; \theta)$ while simultaneously performing online updates of the parameters $\theta$. A key feature is the fully online nature of the algorithm to facilitate faster optimization progress and its combination with a finite-difference estimator to avoid the divergence of gradients due to chaoticity. The proposed OGF method is demonstrated for optimizations over three chaotic ordinary and partial differential equations: the Lorenz-63 equation, the Kuramoto--Sivashinsky equation, and Navier--Stokes solutions of compressible, forced, homogeneous isotropic turbulence. In each case, the OGF method successfully reduces the loss based on $F(x; \theta)$ by several orders of magnitude and accurately recovers the optimal parameters.
Problem

Research questions and friction points this paper is trying to address.

Optimizing steady-state statistics of chaotic turbulent flows
Overcoming gradient divergence in large-scale turbulent simulations
Developing scalable online gradient-flow method for turbulence optimization
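The gradient-divergence friction point can be seen in the smallest of the paper's test systems. The sketch below (an illustration, not code from the paper; the Euler discretization, step size, and perturbation size are assumptions) shows why: in the chaotic Lorenz-63 system, two trajectories differing by $10^{-8}$ initially separate exponentially, and long-horizon adjoint/backpropagated sensitivities blow up at the same rate.

```python
import math

# Chaotic sensitivity in Lorenz-63 at the standard chaotic parameters.
# A 1e-8 initial perturbation grows roughly like exp(lambda * t) with
# leading Lyapunov exponent lambda ~ 0.9, until it saturates at the
# size of the attractor.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT = 0.005  # explicit-Euler step size (illustrative choice)

def euler_step(state):
    x, y, z = state
    return (x + DT * SIGMA * (y - x),
            y + DT * (x * (RHO - z) - y),
            z + DT * (x * y - BETA * z))

def peak_separation(n_steps, eps=1e-8):
    """Largest distance reached between a trajectory and an eps-perturbed copy."""
    a, b = (1.0, 1.0, 1.0), (1.0 + eps, 1.0, 1.0)
    peak = 0.0
    for _ in range(n_steps):
        a, b = euler_step(a), euler_step(b)
        peak = max(peak, math.dist(a, b))
    return peak
```

After 2,000 steps ($t = 10$) the gap is still far below the attractor scale; after 6,000 steps ($t = 30$) it has been amplified by many orders of magnitude and saturates, which is the regime in which reverse-mode gradients of long-time averages become useless.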
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online gradient-flow method for steady-state statistics
Combines finite-difference estimator with online updates
Scalable to large chaotic turbulent flow systems
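A minimal sketch of how these pieces can fit together on Lorenz-63 (an illustration under assumptions, not the paper's actual algorithm or hyperparameters: the choice of statistic, the Euler integrator, the EMA smoothing, and the learning rate below are all hypothetical). Two simulations are advanced concurrently at $\theta \pm \delta$; exponentially weighted running averages of $z$ serve as online estimates of the statistic; their central difference estimates $dF/d\theta$; and $\theta$ is updated every time step rather than after a full simulation.

```python
# Online-gradient-flow-style sketch on Lorenz-63 (illustrative only):
# tune rho so that the long-time average of z matches a target value,
# using a central finite difference of two concurrently advanced
# simulations and fully online parameter updates.
SIGMA, BETA, DT = 10.0, 8.0 / 3.0, 0.005

def euler_step(state, rho):
    x, y, z = state
    return (x + DT * SIGMA * (y - x),
            y + DT * (x * (rho - z) - y),
            z + DT * (x * y - BETA * z))

def time_averaged_z(rho, n_steps=200_000, n_burn=50_000):
    """Approximate the steady-state statistic F(rho): long-time average of z."""
    s, acc = (1.0, 1.0, 1.0), 0.0
    for i in range(n_steps):
        s = euler_step(s, rho)
        if i >= n_burn:
            acc += s[2]
    return acc / (n_steps - n_burn)

def ogf_sketch(theta, target, delta=1.0, lr=1e-4, alpha=2.5e-4,
               n_warmup=20_000, n_updates=200_000):
    """Advance two simulations at theta +/- delta; smooth their z-statistics
    with EMAs; update theta every step from the finite-difference gradient."""
    sp = sm = (1.0, 1.0, 1.0)
    ema_p = ema_m = target  # online estimates of F(theta +/- delta)
    for i in range(n_warmup + n_updates):
        sp = euler_step(sp, theta + delta)
        sm = euler_step(sm, theta - delta)
        ema_p += alpha * (sp[2] - ema_p)
        ema_m += alpha * (sm[2] - ema_m)
        if i >= n_warmup:
            f_mid = 0.5 * (ema_p + ema_m)              # estimate of F(theta)
            df_dtheta = (ema_p - ema_m) / (2 * delta)  # finite-difference gradient
            theta -= lr * 2.0 * (f_mid - target) * df_dtheta  # descend (F - F*)^2
    return theta
```

Starting from $\rho = 20$ with a target measured at $\rho^* = 28$, the recovered parameter settles near 28. The design constraint this sketch illustrates is that the averaging timescale ($1/\alpha$ steps) must exceed the flow's decorrelation time, and the parameter update rate must be slow relative to that averaging, so that the online statistic estimates stay meaningful while $\theta$ drifts.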