🤖 AI Summary
Existing analyses of gradient flow in deep linear networks rely on restrictive balancedness conditions on the initialization, limiting their applicability.
Method: We propose a unified geometric modeling framework that establishes a rigorous equivalence between parameter-space gradient flow and Riemannian gradient flow in function space—without assuming any initialization-dependent constraints. Leveraging differential geometry and optimization theory, we exploit the structural constraints of convolutional kernels to construct an immersion from parameter space to function space, explicitly deriving the induced Riemannian metric and characterizing how initialization affects it.
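As a schematic sketch (in generic notation of ours, not taken from the paper), the equivalence can be viewed as a pushforward of the parameter-space flow: write $\mu$ for the map sending the tuple of convolutional kernels $\theta$ to the end-to-end linear map $W = \mu(\theta)$. The parameter-space gradient flow
\[
\dot{\theta}(t) = -\nabla_{\theta}\, L\bigl(\mu(\theta(t))\bigr) = -D\mu(\theta(t))^{\top}\, \nabla L\bigl(W(t)\bigr)
\]
pushes forward along the Jacobian $D\mu$ to
\[
\dot{W}(t) = D\mu(\theta(t))\,\dot{\theta}(t) = -\,D\mu(\theta(t))\,D\mu(\theta(t))^{\top}\,\nabla L\bigl(W(t)\bigr).
\]
Whenever $A := D\mu\,D\mu^{\top}$ is positive definite and depends on $\theta$ only through $W$ and quantities conserved along the flow (hence fixed by the initialization), this is the Riemannian gradient flow for the metric $g_W(u,v) = \langle u, A^{-1} v \rangle$ on function space.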
Contribution/Results: We prove that this equivalence holds for arbitrary initialization and arbitrary depth: unconditionally for $D$-dimensional convolutions with $D \geq 2$, and for one-dimensional convolutions whenever all strides are greater than one. This yields the first geometric framework for gradient dynamics applicable to general convolutional architectures, offering a principled, initialization-agnostic perspective on optimization in deep learning.
📝 Abstract
We study geometric properties of the gradient flow for learning deep linear convolutional networks. For linear fully connected networks, it has been shown recently that the corresponding gradient flow on parameter space can be written as a Riemannian gradient flow on function space (i.e., on the product of weight matrices) if the initialization satisfies a so-called balancedness condition. We establish that the gradient flow on parameter space for learning linear convolutional networks can be written as a Riemannian gradient flow on function space regardless of the initialization. This result holds for $D$-dimensional convolutions with $D \geq 2$, and for $D = 1$ it holds if all so-called strides of the convolutions are greater than one. The corresponding Riemannian metric depends on the initialization.
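For comparison, in the fully connected case referenced above (written here in standard notation from the deep linear network literature, not reproduced from this paper), the end-to-end matrix $W = W_N \cdots W_1$ of a network whose layers follow the gradient flow $\dot{W}_j = -\nabla_{W_j} L(W_N \cdots W_1)$ evolves as
\[
\dot{W} = -\sum_{j=1}^{N} \bigl(W_N \cdots W_{j+1}\bigr)\bigl(W_N \cdots W_{j+1}\bigr)^{\top}\, \nabla L(W)\, \bigl(W_{j-1} \cdots W_1\bigr)^{\top}\bigl(W_{j-1} \cdots W_1\bigr),
\]
where empty products denote identity matrices. The balancedness condition $W_{j+1}^{\top} W_{j+1} = W_j W_j^{\top}$ at initialization, which is preserved along the flow, is what allows this right-hand side to be interpreted as a Riemannian gradient with respect to a metric on the space of end-to-end matrices.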