Subnet-Aware Dynamic Supernet Training for Neural Architecture Search

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
In N-shot neural architecture search (NAS), static supernet training induces unfairness among subnetworks—favoring low-complexity architectures—and amplifies momentum-based gradient noise. To address these issues, we propose a subnet-aware dynamic training paradigm. Our key contributions are: (1) Complexity-Aware Learning Rate scheduling (CaLR), which adaptively modulates learning rates based on subnetwork computational complexity to mitigate optimization bias; and (2) Momentum Separation (MS), which maintains independent momentum buffers for subnetworks of differing complexities to suppress gradient noise propagation. The method seamlessly integrates with weight-sharing supernets and multi-subnet joint training, incurring negligible computational overhead. Extensive experiments on NAS-Bench-201, the MobileNet search space, and CIFAR/ImageNet benchmarks demonstrate significant improvements in search performance. Moreover, our approach is plug-and-play compatible with mainstream N-shot NAS frameworks.
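The complexity-aware LR idea in the summary can be illustrated with a small sketch. This is a hypothetical formula, not the paper's exact scheduler: it keeps a standard cosine decay but raises the decay floor for high-complexity subnets, so they retain relatively larger learning rates late in training. The function name `calr_lr`, the normalization, and the mixing weight `alpha` are all illustrative assumptions.

```python
import math

def calr_lr(base_lr, step, total_steps, complexity, min_c, max_c, alpha=0.5):
    """Hypothetical complexity-aware LR sketch (not the paper's exact CaLR).

    Higher-complexity subnets (e.g., measured in FLOPs) get a higher LR floor,
    counteracting the training bias toward low-complexity subnets.
    """
    # Normalize subnet complexity to [0, 1].
    c = (complexity - min_c) / (max_c - min_c)
    # Standard cosine decay from 1 down to 0.
    decay = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    # Blend: the decay floor rises with complexity (floor = alpha * c).
    return base_lr * (decay * (1.0 - alpha * c) + alpha * c)
```

At step 0 every subnet sees `base_lr`; at the final step a maximally complex subnet still receives `alpha * base_lr`, while a minimal one decays to zero.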

📝 Abstract
N-shot neural architecture search (NAS) exploits a supernet containing all candidate subnets for a given search space. The subnets are typically trained with a static training strategy (e.g., using the same learning rate (LR) scheduler and optimizer for all subnets). This, however, does not account for the distinct characteristics of individual subnets, leading to two problems: (1) supernet training is biased toward low-complexity subnets (unfairness); (2) the momentum update in the supernet is noisy (noisy momentum). We present a dynamic supernet training technique that addresses these problems by adapting the training strategy to the subnets. Specifically, we introduce a complexity-aware LR scheduler (CaLR) that controls the decay ratio of the LR adaptive to the complexities of subnets, which alleviates the unfairness problem. We also present a momentum separation technique (MS). It groups subnets with similar structural characteristics and uses a separate momentum for each group, avoiding the noisy momentum problem. Our approach is applicable to various N-shot NAS methods at marginal cost, while drastically improving search performance. We validate the effectiveness of our approach on various search spaces (e.g., NAS-Bench-201, the MobileNet space) and datasets (e.g., CIFAR-10/100, ImageNet).
Problem

Research questions and friction points this paper is trying to address.

Addresses unfairness in supernet training towards low-complexity subnets.
Reduces noisy momentum updates in supernet training.
Introduces adaptive training strategies for individual subnet characteristics.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic supernet training adapts to subnet characteristics.
Complexity-aware LR scheduler adjusts learning rates.
Momentum separation groups subnets to reduce noise.
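The momentum-separation bullet above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a scalar SGD-with-momentum update that keeps one momentum buffer per complexity group, so gradients from structurally different subnets never mix in a single shared buffer. The class name `GroupedMomentumSGD` and the scalar-weight simplification are assumptions for clarity.

```python
from collections import defaultdict

class GroupedMomentumSGD:
    """Hypothetical sketch of momentum separation (MS): one momentum buffer
    per subnet complexity group, instead of a single shared buffer."""

    def __init__(self, lr=0.1, mu=0.9):
        self.lr, self.mu = lr, mu
        self.buffers = defaultdict(float)  # group id -> momentum (scalar demo)

    def step(self, weight, grad, group):
        # Classic heavy-ball update, but keyed on the subnet's group,
        # so noise from other groups does not propagate into this buffer.
        buf = self.mu * self.buffers[group] + grad
        self.buffers[group] = buf
        return weight - self.lr * buf
```

A gradient applied under `group=0` leaves the `group=1` buffer untouched, which is the core point of MS: structurally dissimilar subnets stop polluting each other's momentum state.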
Jeimin Jeon
Yonsei University, Articron Inc.
Youngmin Oh
Yonsei University
Junghyup Lee
Samsung Research
Donghyeon Baek
Yonsei University
Dohyung Kim
Samsung Advanced Institute of Technology
Chanho Eom
Assistant Professor @ Chung-Ang University
Computer Vision · Machine Learning · Artificial Intelligence
Bumsub Ham
Yonsei University
Computer Vision · Image Processing