🤖 AI Summary
This work investigates what makes "winning ticket" sparse subnetworks generalize: whether a sparse architecture that trains to full-network performance can be identified without relying on a specific initialization or on small-scale settings. The authors find that the parameter sign configuration alone carries critical generalization information: preserving only the sparse topology and the signs, while discarding the magnitudes, suffices to recover the original network's accuracy. They propose a sign inheritance mechanism that integrates iterative pruning with error-barrier optimization, achieving full-network-level performance across diverse architectures and datasets while substantially reducing dependence on normalization layers and precise weight initialization. Linear mode connectivity analysis further confirms the robust convergence of the resulting subnetworks. The code is publicly available.
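The sign inheritance idea described above can be illustrated with a minimal sketch: a randomly initialized network keeps its own weight magnitudes but adopts the sparsity mask and the per-parameter signs of a trained sparse ticket. All array names and shapes here are hypothetical, and the paper's actual procedure (AWS) is more involved than this toy version.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: weights of a sparse net trained by iterative
# pruning, its binary sparsity mask, and a fresh random initialization.
mask = (rng.random((8, 8)) < 0.2).astype(float)   # ~20% of weights kept
trained_sparse = rng.normal(size=(8, 8)) * mask   # trained ticket weights
fresh_init = rng.normal(size=(8, 8))              # any random initialization

# Inherit sparsity and parameter signs from the trained ticket while
# keeping the fresh initialization's magnitudes (signs + topology only).
inherited = mask * np.abs(fresh_init) * np.sign(trained_sparse)
```

On the kept (unmasked) entries, `inherited` matches the trained ticket in sign and the fresh initialization in magnitude; pruned entries stay exactly zero.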
📝 Abstract
The Lottery Ticket Hypothesis (LTH) posits the existence of a sparse subnetwork (a.k.a. winning ticket) that can generalize comparably to its over-parameterized counterpart when trained from scratch. The common approach to finding a winning ticket is to preserve the original strong generalization through Iterative Pruning (IP) and transfer information useful for achieving the learned generalization by applying the resulting sparse mask to an untrained network. However, existing IP methods still struggle to generalize their observations beyond ad hoc initializations and small-scale architectures or datasets, or they bypass these challenges by applying their mask to trained weights instead of initialized ones. In this paper, we demonstrate that the parameter sign configuration plays a crucial role in conveying useful information for generalization to any randomly initialized network. Through linear mode connectivity analysis, we observe that a sparse network trained by an existing IP method can retain its basin of attraction if its parameter signs and normalization layer parameters are preserved. To take a step closer to finding a winning ticket, we alleviate the reliance on normalization layer parameters by preventing high error barriers along the linear path between the sparse network trained by our method and its counterpart with initialized normalization layer parameters. Interestingly, across various architectures and datasets, we observe that any randomly initialized network can be optimized to exhibit low error barriers along the linear path to the sparse network trained by our method by inheriting its sparsity and parameter sign information, potentially achieving performance comparable to the original. The code is available at https://github.com/JungHunOh/AWS_ICLR2025.git.
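The error barrier used in linear mode connectivity analyses is commonly computed as the peak loss along the straight line between two parameter vectors, minus the average of the endpoint losses; a barrier near zero means the two solutions lie in the same basin. The sketch below uses a toy quadratic loss as a stand-in for empirical risk; the function names and the 21-point grid are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def loss(theta):
    # Toy quadratic "loss" standing in for empirical risk on a dataset.
    return float(np.mean((theta - 1.0) ** 2))

def error_barrier(theta_a, theta_b, num_points=21):
    """Max loss along the linear path minus the mean of the endpoint losses."""
    alphas = np.linspace(0.0, 1.0, num_points)
    path = [loss((1 - a) * theta_a + a * theta_b) for a in alphas]
    return max(path) - 0.5 * (path[0] + path[-1])

# Two parameter vectors on opposite sides of the same convex basin:
theta_a = np.zeros(4)
theta_b = 2.0 * np.ones(4)
barrier = error_barrier(theta_a, theta_b)  # zero barrier: linearly connected
```

With a convex toy loss the path loss never exceeds the endpoints, so the barrier is zero; between independently trained deep networks the same quantity is typically large unless, as the paper argues, sparsity and sign information are shared.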