🤖 AI Summary
To address the discrete nature and high computational cost of neural architecture search (NAS), this paper proposes SPARCS (SPectral ARchiteCture Search), an architecture search protocol built on spectral analysis. SPARCS exploits the spectral attributes of the inter-layer transfer matrices to parameterize the space of possible architectures as a continuous, differentiable manifold, so that gradient-based, end-to-end optimization can be employed. By acting on the spectral distribution, the search balances architectural expressivity against parameter count, allowing a task-adapted, compact architecture to emerge automatically. On simple benchmark models, the architectures discovered by SPARCS exhibit the minimal degree of expressivity needed for the task at hand, with a reduced parameter count compared to viable alternatives, supporting the value of a spectral perspective for encoding architectural priors in NAS.
📝 Abstract
Architecture design and optimization are challenging problems in the field of artificial neural networks. Working in this context, we here present SPARCS (SPectral ARchiteCture Search), a novel architecture search protocol which exploits the spectral attributes of the inter-layer transfer matrices. SPARCS allows one to explore the space of possible architectures by spanning continuous and differentiable manifolds, thus enabling gradient-based optimization algorithms to be employed. With reference to simple benchmark models, we show that the newly proposed method yields a self-emerging architecture with the minimal degree of expressivity needed to handle the task under investigation, and with a reduced parameter count as compared to other viable alternatives.
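To make the underlying idea concrete, here is a minimal, hypothetical sketch of spectral parameterization — not the authors' actual protocol. A single inter-layer transfer matrix is expressed through a fixed orthonormal eigenbasis, only its eigenvalues are trained, and an L1 (soft-threshold) penalty on the eigenvalues prunes unneeded eigen-directions, so the effective width of the map emerges from training. The toy task, basis choice, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_samples = 8, 200

# Assumed setup: a fixed orthonormal eigenbasis Phi; only the
# eigenvalues lam of the transfer matrix W = Phi diag(lam) Phi^T train.
Phi, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Synthetic target map whose spectrum is sparse in that basis:
# only 2 of the n eigen-directions are actually needed.
lam_true = np.zeros(n)
lam_true[:2] = [1.5, -2.0]
T = Phi @ np.diag(lam_true) @ Phi.T

X = rng.standard_normal((n_samples, n))
Y = X @ T.T

lam = np.zeros(n)      # trainable eigenvalues
lr, l1 = 0.1, 0.01     # step size and sparsity strength

for _ in range(500):
    W = Phi @ np.diag(lam) @ Phi.T
    E = X @ W.T - Y                       # residuals on the toy task
    dLdW = E.T @ X / n_samples            # gradient of 0.5*MSE w.r.t. W
    grad = np.diag(Phi.T @ dLdW @ Phi)    # chain rule onto eigenvalues
    lam = lam - lr * grad
    # Proximal (soft-threshold) step: L1 on eigenvalues zeroes out
    # eigen-directions the task does not need.
    lam = np.sign(lam) * np.maximum(np.abs(lam) - lr * l1, 0.0)

effective = int(np.sum(np.abs(lam) > 1e-2))
print("surviving eigen-directions:", effective)
print("fit error:", np.linalg.norm(Phi @ np.diag(lam) @ Phi.T - T))
```

Because the loss is smooth in the eigenvalues, the "architecture" (how many eigen-directions survive) is selected by the same gradient-based loop that fits the task, which is the continuous, differentiable relaxation the abstract describes.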