🤖 AI Summary
Existing neural architecture search (NAS) methods suffer from three critical bottlenecks: poor architectural adaptability across scenarios, costly searches that must be repeated for each deployment environment, and unstable performance across platforms. To address these, we propose a data-aware, continuously adaptive NAS framework. Our method introduces: (1) continuous architectural distribution modeling with a learnable gating mechanism, enabling smooth, differentiable architecture evolution; (2) a multi-stage joint optimization strategy over a unified search space for efficient cross-device adaptation; and (3) hardware-aware sampling grounded in distribution learning, jointly optimizing accuracy and deployment constraints. Evaluated on five benchmark datasets, our approach consistently outperforms state-of-the-art methods, reducing search overhead by 30–50% while maintaining robust performance across heterogeneous computational resources. To the best of our knowledge, this is the first work to achieve end-to-end hardware-adaptive architecture generation.
📝 Abstract
Neural Architecture Search (NAS) has emerged as a powerful approach for automating neural network design. However, existing NAS methods face critical limitations in real-world deployments: architectures lack adaptability across scenarios, each deployment context requires a costly separate search, and maintaining consistent performance across diverse platforms remains challenging. We propose DANCE (Dynamic Architectures with Neural Continuous Evolution), which reformulates architecture search as a continuous evolution problem by learning distributions over architectural components. DANCE introduces three key innovations: a continuous architecture distribution enabling smooth adaptation, a unified architecture space with learned selection gates for efficient sampling, and a multi-stage training strategy for effective deployment optimization. Extensive experiments across five datasets demonstrate DANCE's effectiveness. Our method consistently outperforms state-of-the-art NAS approaches in accuracy while significantly reducing search costs. Under varying computational constraints, DANCE maintains robust performance while smoothly adapting architectures to different hardware requirements. The code and appendix can be found at https://github.com/Applied-Machine-Learning-Lab/DANCE.
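To make the "learned selection gates" idea concrete, here is a minimal sketch of a differentiable gate as a softmax-weighted mixture over candidate operations. This is a hypothetical illustration under our own assumptions (the `GatedLayer` class and its `ops`/`logits` names are ours), not DANCE's actual implementation; it only shows why gating keeps architecture choice smooth and differentiable, and how a discrete architecture can be read off afterward.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

class GatedLayer:
    """A layer whose operation is chosen by a learnable selection gate.

    Instead of picking one candidate operation discretely, the layer
    outputs a softmax-weighted mixture of all candidates, so the gate
    logits can be trained by gradient descent alongside the weights.
    """

    def __init__(self, ops):
        self.ops = ops                    # candidate operations
        self.logits = np.zeros(len(ops))  # learnable gate parameters

    def forward(self, x):
        weights = softmax(self.logits)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

    def sample(self):
        # After training, a discrete architecture is recovered by
        # keeping the highest-weight operation at each gate.
        return self.ops[int(np.argmax(softmax(self.logits)))]

# Toy candidates: identity, doubling, and squaring.
ops = [lambda x: x, lambda x: 2 * x, lambda x: x ** 2]
layer = GatedLayer(ops)
out = layer.forward(np.array([1.0, 2.0]))  # uniform gate: mean of the three ops
```

With untrained (zero) logits the gate weights each candidate equally, so the output is simply the average of the three operations; as training pushes the logits apart, the mixture concentrates on one operation and `sample()` returns it.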