🤖 AI Summary
Turbulent super-resolution models often rely on explicit data augmentation or specialized equivariant architectures to enforce rotational equivariance, yet the potential for implicit learning of such symmetries remains underexplored.
Method: Leveraging Kolmogorov’s hypothesis of local isotropy, we investigate whether standard 3D CNNs can implicitly learn rotational equivariance from the intrinsic statistical isotropy of turbulent fields. We train models on channel flow data from regions with varying anisotropy—spanning the isotropic center-plane to the highly anisotropic near-wall region—and systematically analyze the relationship between spatiotemporal sampling density and equivariance error.
Contribution/Results: We demonstrate that models trained in more isotropic regions exhibit significantly lower rotational equivariance error; increasing spatiotemporal sampling further reduces this error. This work is the first to reveal a scale-dependent effect of data-intrinsic symmetry in deep learning for turbulence, and proposes a design principle distinguishing when implicit symmetry learning suffices versus when explicit architectural enforcement is necessary.
📝 Abstract
The immense computational cost of simulating turbulence has motivated the use of machine learning approaches for super-resolving turbulent flows. A central challenge is ensuring that learned models respect physical symmetries, such as rotational equivariance. We show that standard convolutional neural networks (CNNs) can partially acquire this symmetry without explicit augmentation or specialized architectures, as turbulence itself provides implicit rotational augmentation in both time and space. Using 3D channel-flow subdomains with differing anisotropy, we find that models trained on more isotropic mid-plane data achieve lower equivariance error than those trained on boundary layer data, and that greater temporal or spatial sampling further reduces this error. We further observe a distinct scale dependence of the equivariance error that persists regardless of dataset anisotropy and is consistent with Kolmogorov's local isotropy hypothesis. These results clarify when rotational symmetry must be explicitly incorporated into learning algorithms and when it can be obtained directly from turbulence, enabling more efficient and symmetry-aware super-resolution.
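The equivariance error discussed above can be made concrete with a small sketch. The snippet below is an illustrative toy, not the paper's evaluation code: it measures the relative mismatch between "rotate then apply" and "apply then rotate" for two hypothetical stand-ins for a learned model, an isotropic 7-point Laplacian stencil (which commutes with 90° grid rotations) and an anisotropic x-derivative (which does not). The function names and the choice of a single 90° rotation in the x-y plane are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rot90_xy(u):
    # 90-degree rotation of a 3D field in the x-y plane
    return np.rot90(u, k=1, axes=(0, 1))

def laplacian(u):
    # isotropic 7-point Laplacian stencil with periodic boundaries;
    # treats all axes symmetrically, so it commutes with 90° rotations
    return sum(np.roll(u, s, axis=a) for a in range(3) for s in (-1, 1)) - 6.0 * u

def ddx(u):
    # anisotropic centred difference along x only; singles out one axis
    return 0.5 * (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0))

def equivariance_error(f, u):
    # relative norm of f(R u) - R f(u), the quantity the paper tracks
    a, b = f(rot90_xy(u)), rot90_xy(f(u))
    return np.linalg.norm(a - b) / np.linalg.norm(b)

u = rng.standard_normal((16, 16, 16))
print(equivariance_error(laplacian, u))  # ~0: isotropic operator
print(equivariance_error(ddx, u))        # O(1): anisotropic operator
```

In the paper's setting, `f` would be the trained 3D CNN and `u` a velocity-field patch; training data whose statistics are closer to isotropic is what drives this error down without any architectural constraint.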