🤖 AI Summary
The theoretical understanding of cross-entropy (CE) loss optimization in non-convex deep learning remains limited, especially regarding global dynamics. Method: Focusing on the minimal non-convex setting, two-layer linear networks with standard-basis inputs, we rigorously analyze the CE gradient flow. We discover that Hadamard initialization diagonalizes the softmax operator and freezes the singular vectors of the weight matrices; leveraging this, we construct an explicit Lyapunov function. Contribution/Results: We establish the first global convergence guarantee for CE gradient flow to neural collapse, the geometric configuration in which within-class variability vanishes (features collapse onto their class means) and the class means become equidistant and maximally separated. This result removes the prior reliance on squared-loss surrogates or convexity assumptions, providing the first non-convex, multi-class, globally convergent theory for CE optimization, and it further reveals the critical role of implicit regularization in realistic training dynamics.
📝 Abstract
Cross-entropy (CE) training loss dominates deep learning practice, yet existing theory often relies on simplifications that miss essential behavior: replacing CE with squared loss, or restricting attention to convex models. CE and squared loss generate fundamentally different dynamics, and convex linear models cannot capture the complexities of non-convex optimization. We provide an in-depth characterization of multi-class CE optimization dynamics beyond the convex regime by analyzing a canonical two-layer linear neural network with standard-basis vectors as inputs: the simplest non-convex extension for which the implicit bias remained unknown. This model coincides with the unconstrained features model used to study neural collapse, making our work the first to prove that gradient flow on CE converges to the neural collapse geometry. We construct an explicit Lyapunov function that establishes global convergence despite the presence of spurious critical points in the non-convex landscape. A key insight underlying our analysis is an easily overlooked fact: Hadamard initialization diagonalizes the softmax operator, freezing the singular vectors of the weight matrices and reducing the dynamics entirely to their singular values. This technique opens a pathway for analyzing CE training dynamics well beyond the specific setting considered here.
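The frozen-singular-vector mechanism described above can be checked numerically. The following minimal sketch is our own construction, not code from the paper; the number of classes, initialization scales, and step size are illustrative assumptions, and gradient descent stands in for the continuous gradient flow. It trains a two-layer linear network `W2 @ W1` on the K standard-basis inputs from a symmetric (Sylvester) Hadamard initialization, then verifies that the weights remain diagonal in the Hadamard basis throughout training, i.e., only the singular values move.

```python
import numpy as np

# Symmetric Sylvester-Hadamard matrix of order 4: H = H^T and H @ H = 4*I.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H = np.kron(H2, H2)
K = H.shape[0]          # number of classes = input dimension (illustrative: K = 4)

# Hadamard initialization of the two-layer linear network (scales a, b are
# illustrative assumptions). Inputs are the standard basis, so X = I and the
# logit matrix is simply Z = W2 @ W1.
a, b = 0.5, 0.5
W1 = a * H              # first layer: feature of class k is the k-th column of W1
W2 = b * H              # second layer (H is symmetric, so this is H times a diagonal)

def softmax_cols(Z):
    """Column-wise softmax with max-subtraction for numerical stability."""
    Z = Z - Z.max(axis=0, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=0, keepdims=True)

def ce_loss(Z):
    """Mean cross-entropy over the K standard-basis inputs with labels 0..K-1."""
    return -np.mean(np.log(np.diag(softmax_cols(Z))))

loss0 = ce_loss(W2 @ W1)
lr = 0.5
for _ in range(2000):
    Z = W2 @ W1
    G = (softmax_cols(Z) - np.eye(K)) / K   # dL/dZ for the averaged CE loss
    gW1 = W2.T @ G                          # dL/dW1 (chain rule, X = I)
    gW2 = G @ W1.T                          # dL/dW2
    W1 -= lr * gW1
    W2 -= lr * gW2

# If the singular vectors are frozen, W1 stays of the form (diagonal) @ H and
# W2 of the form H @ (diagonal); since H^{-1} = H / K, conjugating back must
# give diagonal matrices up to floating-point roundoff.
D1 = W1 @ H / K
D2 = H @ W2 / K

def max_offdiag(M):
    return np.abs(M - np.diag(np.diag(M))).max()

print(f"loss: {loss0:.4f} -> {ce_loss(W2 @ W1):.2e}")
print("max off-diagonal in Hadamard basis:", max_offdiag(D1), max_offdiag(D2))
```

In this run the loss drives toward zero while the off-diagonal entries of `D1` and `D2` stay at roundoff level, consistent with the reduction of the dynamics to the singular values alone.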