🤖 AI Summary
This work addresses the challenge of simultaneously optimizing accuracy, trainability, and resource efficiency in quantum neural networks (QNNs) on noisy intermediate-scale quantum (NISQ) devices, where circuit cutting incurs prohibitive exponential overhead. To this end, we propose QNAS, a novel framework that unifies hardware-aware evaluation, multi-objective optimization, and explicit modeling of cutting costs within quantum neural architecture search. Leveraging a shared-parameter SuperCircuit and the NSGA-II algorithm, QNAS jointly optimizes validation error, a proxy for runtime, and the number of subcircuits to automatically discover Pareto-optimal architectures. Experiments demonstrate that QNAS achieves 97.16%, 87.38%, and 100% accuracy on MNIST, Fashion-MNIST, and Iris datasets using only 2-layer circuits with 8, 5, and 4 qubits, respectively, effectively balancing performance, resource consumption, and deployment feasibility.
📝 Abstract
Designing quantum neural networks (QNNs) that are both accurate and deployable on NISQ hardware is challenging. Handcrafted ansätze must balance expressivity, trainability, and resource use, while limited qubits often necessitate circuit cutting. Existing quantum architecture search methods primarily optimize accuracy, only heuristically control quantum resource usage, and mostly ignore the exponential overhead of circuit cutting. We introduce QNAS, a neural architecture search framework that unifies hardware-aware evaluation, multi-objective optimization, and cutting-overhead awareness for hybrid quantum-classical neural networks (HQNNs). QNAS trains a shared-parameter SuperCircuit and uses NSGA-II to jointly optimize three objectives: (i) validation error, (ii) a runtime cost proxy measuring wall-clock evaluation time, and (iii) the estimated number of subcircuits under a target qubit budget. QNAS evaluates candidate HQNNs with only a few epochs of training and discovers clear Pareto fronts that reveal trade-offs between accuracy, efficiency, and cutting overhead. Across MNIST, Fashion-MNIST, and Iris benchmarks, we observe that embedding type and CNOT mode selection significantly impact both accuracy and efficiency, with angle-y embedding and sparse entangling patterns outperforming other configurations on image datasets, and amplitude embedding excelling on tabular data (Iris). On MNIST, the best architecture achieves 97.16% test accuracy with a compact 8-qubit, 2-layer circuit; on the more challenging Fashion-MNIST, 87.38% with a 5-qubit, 2-layer circuit; and on Iris, 100% validation accuracy with a 4-qubit, 2-layer circuit. QNAS surfaces these design insights automatically during search, guiding practitioners toward architectures that balance accuracy, resource efficiency, and practical deployability on current hardware.
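The core of the search is NSGA-II-style non-dominated selection over the three objectives above. The following is an illustrative sketch, not the paper's implementation: it shows Pareto-front extraction over hypothetical candidate architectures, each scored as a (validation error, runtime proxy, subcircuit count) tuple, all minimized. The candidate values are made up for illustration.

```python
# Illustrative sketch of the Pareto-dominance step used by NSGA-II-style search.
# Objectives (all minimized), matching QNAS's three objectives:
#   (validation error, runtime cost proxy, estimated number of subcircuits)
# The candidate tuples below are hypothetical, not results from the paper.

def dominates(a, b):
    """True if candidate a is at least as good as b on every objective
    and strictly better on at least one (standard Pareto dominance)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated subset of candidates."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Hypothetical candidate architectures after short "few-epoch" evaluation:
cands = [
    (0.03, 1.2, 1),  # accurate, moderate runtime, fits without cutting
    (0.05, 0.8, 1),  # slightly less accurate but faster
    (0.03, 1.5, 2),  # dominated by the first candidate on runtime and cutting
    (0.02, 2.0, 4),  # most accurate, but requires 4 subcircuits
]
front = pareto_front(cands)
```

In a full NSGA-II run this dominance test drives non-dominated sorting and crowding-distance selection across generations; here it only extracts the final front, which is the set of trade-off architectures QNAS presents to the practitioner.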