🤖 AI Summary
Existing DARTS methods impose topological priors on the search space, such as cell homogeneity and the rule that each intermediate node retains exactly two operators from distinct nodes, limiting architectural flexibility and search potential. This paper proposes Flexible DARTS (FX-DARTS), a framework that relaxes these priors: it removes structural constraints on cell topology and replaces the standard discretization mechanism with Entropy-based Super-Network Shrinking (ESS), which optimizes architecture weights while progressively pruning the super-network, thereby enlarging the search space without sacrificing optimization stability. Within a single search procedure, FX-DARTS derives a set of neural architectures with competitive trade-offs between performance and computational complexity on image classification benchmarks.
📝 Abstract
Strong priors are imposed on the search space of Differentiable Architecture Search (DARTS): cells of the same type share the same topological structure, and each intermediate node retains two operators from distinct predecessor nodes. While these priors reduce optimization difficulty and improve the applicability of searched architectures, they hinder the further development of automated machine learning (AutoML) and prevent the optimization algorithm from exploring more powerful neural networks through greater architectural flexibility. This paper aims to relax these prior constraints by eliminating restrictions on cell topology and modifying the discretization mechanism for super-networks. Specifically, the Flexible DARTS (FX-DARTS) method, built on an Entropy-based Super-Network Shrinking (ESS) framework, is presented to address the challenges arising from the elimination of prior constraints. Notably, FX-DARTS derives neural architectures without strict prior rules while maintaining stability in the enlarged search space. Experimental results on image classification benchmarks demonstrate that FX-DARTS is capable of exploring a set of neural architectures with competitive trade-offs between performance and computational complexity within a single search procedure.
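The abstract does not detail how ESS decides which parts of the super-network to discard. A minimal sketch of one plausible entropy-guided shrinking rule is shown below, assuming each edge holds a vector of architecture logits over candidate operations and that edges whose softmax distribution remains near-uniform (high entropy, i.e., the search is still undecided) are pruned while confident edges are kept. The function names `edge_entropy` and `shrink` and the threshold value are illustrative, not taken from the paper.

```python
import numpy as np

def edge_entropy(alpha):
    """Shannon entropy (nats) of the softmax distribution over
    candidate operations induced by architecture logits `alpha`."""
    p = np.exp(alpha - alpha.max())  # numerically stable softmax
    p /= p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def shrink(edges, threshold):
    """Toy shrinking step: keep only edges whose operation
    distribution is confident (entropy below `threshold`)."""
    return {e: a for e, a in edges.items() if edge_entropy(a) < threshold}

# One edge is nearly decided, the other is still uniform.
edges = {
    "n0->n2": np.array([10.0, 0.0, 0.0]),  # confident: low entropy
    "n1->n2": np.array([0.0, 0.0, 0.0]),   # undecided: entropy = ln 3
}
kept = shrink(edges, threshold=0.5)
```

In an actual differentiable search the entropy would be recomputed as the architecture weights are trained, so pruning happens gradually rather than in a single pass as in this toy example.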