🤖 AI Summary
Quantum neural networks (QNNs) suffer from poor scalability and computational efficiency due to hardware constraints on qubit count and excessive entanglement. To address this, we propose Adaptive Threshold Pruning (ATP), a novel data-encoding technique for QNNs that dynamically quantifies feature importance and sparsifies the quantum state encoding. ATP jointly leverages entanglement-entropy analysis and Fast Gradient Sign Method (FGSM) adversarial training to reduce circuit complexity and entanglement simultaneously. Notably, ATP is the first method to introduce adaptive pruning directly into the QNN input-encoding stage, preserving model robustness while significantly reducing resource overhead. Experiments across multiple benchmark datasets demonstrate that ATP achieves accuracy comparable to full-feature encoding with substantially fewer qubits, while also improving adversarial accuracy. This work provides a practical pathway toward deploying QNNs under stringent resource constraints.
📝 Abstract
Quantum Neural Networks (QNNs) offer promising capabilities for complex data tasks, but are often constrained by limited qubit resources and high entanglement, which can hinder scalability and efficiency. In this paper, we introduce Adaptive Threshold Pruning (ATP), an encoding method that reduces entanglement and optimizes data complexity for efficient computation in QNNs. ATP dynamically prunes non-essential features in the data based on adaptive thresholds, effectively reducing quantum circuit requirements while preserving high performance. Extensive experiments across multiple datasets demonstrate that ATP reduces entanglement entropy and improves adversarial robustness when combined with adversarial training methods such as FGSM. Our results highlight ATP's ability to balance computational efficiency and model resilience, achieving significant performance improvements with fewer resources and making QNNs more feasible in practical, resource-constrained settings.
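To make the core idea concrete, here is a minimal classical sketch of threshold-based feature pruning before quantum encoding. The function name, the magnitude-based importance score, and the `alpha * mean` threshold rule are all hypothetical illustrations (the abstract does not specify ATP's exact criterion); the point is only that an adaptive, data-dependent threshold selects which features are encoded, shrinking the qubit budget for amplitude encoding from ceil(log2(d)) to ceil(log2(k)) for k kept features.

```python
import numpy as np

def adaptive_threshold_prune(x, alpha=0.5):
    """Keep features whose importance clears an adaptive threshold.

    Hypothetical sketch: importance = |x_i|, threshold = alpha * mean(|x|).
    The actual ATP importance measure and threshold rule may differ.
    """
    importance = np.abs(x)
    threshold = alpha * importance.mean()  # adapts to each input sample
    mask = importance >= threshold
    return x[mask], mask

# Example: 8 features would need ceil(log2(8)) = 3 qubits under
# amplitude encoding; after pruning, fewer qubits suffice.
x = np.array([0.9, 0.05, 0.7, 0.01, 0.4, 0.02, 0.8, 0.03])
pruned, mask = adaptive_threshold_prune(x, alpha=0.5)
qubits_needed = int(np.ceil(np.log2(len(pruned))))
print(pruned)         # kept features: [0.9 0.7 0.4 0.8]
print(qubits_needed)  # 2 qubits instead of 3
```

Because the threshold is computed per sample rather than fixed globally, inputs with concentrated information prune more aggressively than diffuse ones, which is the "adaptive" aspect the abstract emphasizes.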