Implicit Augmentation from Distributional Symmetry in Turbulence Super-Resolution

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Turbulence super-resolution models often rely on explicit data augmentation or specialized equivariant architectures to enforce rotational equivariance, yet the potential for implicitly learning such symmetries remains underexplored. Method: Leveraging Kolmogorov's hypothesis of local isotropy, we investigate whether standard 3D CNNs can implicitly learn rotational equivariance from the intrinsic statistical isotropy of turbulent fields. We train models on channel-flow data from regions with varying anisotropy, spanning the isotropic center-plane to the highly anisotropic near-wall region, and systematically analyze the relationship between spatiotemporal sampling density and equivariance error. Contribution/Results: We demonstrate that models trained in more isotropic regions exhibit significantly lower rotational equivariance error, and that increased spatiotemporal sampling further reduces this error. This work is the first to reveal a scale-dependent effect of data-intrinsic symmetry in deep learning for turbulence, and it proposes a design principle for distinguishing when implicit symmetry learning suffices from when explicit architectural enforcement is necessary.

📝 Abstract
The immense computational cost of simulating turbulence has motivated the use of machine learning approaches for super-resolving turbulent flows. A central challenge is ensuring that learned models respect physical symmetries, such as rotational equivariance. We show that standard convolutional neural networks (CNNs) can partially acquire this symmetry without explicit augmentation or specialized architectures, as turbulence itself provides implicit rotational augmentation in both time and space. Using 3D channel-flow subdomains with differing anisotropy, we find that models trained on more isotropic mid-plane data achieve lower equivariance error than those trained on boundary-layer data, and that greater temporal or spatial sampling further reduces this error. We also observe a distinct scale dependence of the equivariance error, occurring regardless of dataset anisotropy, that is consistent with Kolmogorov's local isotropy hypothesis. These results clarify when rotational symmetry must be explicitly incorporated into learning algorithms and when it can be obtained directly from turbulence, enabling more efficient and symmetry-aware super-resolution.
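To make the central measurement concrete, the following is a minimal sketch of how a rotational equivariance error could be computed for a 3D velocity field. It assumes a PyTorch model and 90-degree rotations about one axis; the function names, the relative-L2 metric, and the rotation convention are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch: rotational equivariance error of a 3D CNN
# super-resolution model. `model` and the relative-L2 metric are
# assumptions for illustration; the paper's setup may differ.
import torch

def rot90_z(field: torch.Tensor) -> torch.Tensor:
    """Rotate a velocity field of shape (B, 3, X, Y, Z) by 90 degrees about z.

    Both the sampling grid and the vector components must rotate:
    the grid via torch.rot90 in the x-y plane, the components via
    (u, v, w) -> (-v, u, w). The component sign convention must match
    the direction of the grid rotation.
    """
    rotated = torch.rot90(field, k=1, dims=(2, 3))  # grid rotation in x-y
    u, v, w = rotated[:, 0], rotated[:, 1], rotated[:, 2]
    return torch.stack((-v, u, w), dim=1)           # component rotation

@torch.no_grad()
def equivariance_error(model, lr: torch.Tensor) -> float:
    """Relative L2 distance between model(R x) and R model(x)."""
    rot_then_sr = model(rot90_z(lr))   # rotate input, then super-resolve
    sr_then_rot = rot90_z(model(lr))   # super-resolve, then rotate output
    return (torch.linalg.vector_norm(rot_then_sr - sr_then_rot)
            / torch.linalg.vector_norm(sr_then_rot)).item()
```

A perfectly equivariant model yields zero; the paper's finding is that this quantity is smaller for models trained on more isotropic mid-plane data.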
Problem

Research questions and friction points this paper is trying to address.

Ensuring learned turbulence super-resolution models respect physical rotational symmetries
Assessing whether standard CNNs can acquire rotational equivariance without explicit augmentation
Clarifying when rotational symmetry must be explicitly incorporated into learning algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Implicit rotational augmentation from turbulence data
Standard CNNs acquire symmetry without explicit augmentation
Scale-dependent equivariance error consistent with local isotropy (probed in the sketch after this list)
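The scale dependence noted in the last point can be probed by restricting the equivariance error to wavenumber shells. Below is a hedged sketch assuming a Fourier band-pass decomposition; the shell-based analysis and all helper names are assumptions for illustration, not necessarily the paper's exact procedure.

```python
# Hedged sketch: equivariance error per length scale, computed by
# band-passing fields over wavenumber shells before comparison.
import torch

def bandpass(field: torch.Tensor, k_lo: float, k_hi: float) -> torch.Tensor:
    """Keep Fourier modes with k_lo <= |k| < k_hi; field: (B, C, X, Y, Z)."""
    nx, ny, nz = field.shape[-3:]
    kx = torch.fft.fftfreq(nx) * nx  # integer wavenumbers
    ky = torch.fft.fftfreq(ny) * ny
    kz = torch.fft.fftfreq(nz) * nz
    kmag = torch.sqrt(kx[:, None, None] ** 2
                      + ky[None, :, None] ** 2
                      + kz[None, None, :] ** 2)
    mask = ((kmag >= k_lo) & (kmag < k_hi)).to(field.dtype)
    f_hat = torch.fft.fftn(field, dim=(-3, -2, -1))
    return torch.fft.ifftn(f_hat * mask, dim=(-3, -2, -1)).real

def scale_error(sr_of_rotated: torch.Tensor, rotated_sr: torch.Tensor,
                k_lo: float, k_hi: float) -> float:
    """Relative L2 equivariance error restricted to one wavenumber shell."""
    a = bandpass(sr_of_rotated, k_lo, k_hi)
    b = bandpass(rotated_sr, k_lo, k_hi)
    return (torch.linalg.vector_norm(a - b)
            / torch.linalg.vector_norm(b)).item()
```

Under Kolmogorov's local isotropy hypothesis, one would expect the high-wavenumber (small-scale) shells to show lower error even for anisotropic training data, which would be consistent with the scale dependence the paper reports.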
👥 Authors
Julia Balla (Massachusetts Institute of Technology): Graph Representation Learning, Geometric Deep Learning, AI for Science
Jeremiah Bailey (Massachusetts Institute of Technology, Cambridge, MA 02139)
Ali Backour (Massachusetts Institute of Technology, Cambridge, MA 02139)
Elyssa Hofgard (Massachusetts Institute of Technology, Cambridge, MA 02139)
Tommi Jaakkola (MIT): machine learning, natural language processing, biomolecular design
Tess Smidt (Massachusetts Institute of Technology): Physics, Machine Learning, Geometry
Ryley McConkey (Massachusetts Institute of Technology, Cambridge, MA 02139)