🤖 AI Summary
Vision Transformers (ViTs) incur prohibitive computational costs on high-resolution images due to the O(N²) complexity of self-attention. To address this, we propose the Hybrid Quantum Vision Transformer (HQViT), a classical-quantum co-designed architecture that reduces computational overhead while improving performance. Our key contributions are: (1) the first full-image amplitude encoding scheme—replacing conventional positional encoding—to enable global quantum state representation; (2) selective quantum acceleration applied only to critical attention coefficients, reducing qubit requirements to O(log₂N), thus enabling deployment on near-term NISQ devices; and (3) a parameterized quantum circuit integrated into a hybrid training framework, supporting end-to-end differentiable optimization. Evaluated on benchmarks including MNIST, HQViT achieves up to 10.9% higher accuracy than state-of-the-art methods, while reducing classical computational load by O(T²d).
📝 Abstract
Transformer-based architectures have revolutionized the landscape of deep learning. In the computer vision domain, the Vision Transformer demonstrates performance on par with, or even surpassing, that of convolutional neural networks. However, the quadratic computational complexity of its self-attention mechanism poses challenges for classical computing, making model training with high-dimensional input data, e.g., images, particularly expensive. To address these limitations, we propose the Hybrid Quantum Vision Transformer (HQViT), which leverages the principles of quantum computing to accelerate model training while enhancing model performance. HQViT introduces whole-image processing with amplitude encoding to better preserve global image information without additional positional encoding. By applying quantum computation to the most critical steps and handling the remaining components classically, we lower the cost of quantum resources for HQViT. The qubit requirement is minimized to $O(\log_2 N)$ and the number of parameterized quantum gates is only $O(\log_2 d)$, making the model well suited for Noisy Intermediate-Scale Quantum (NISQ) devices. By offloading the computationally intensive attention-coefficient matrix calculation to the quantum framework, HQViT reduces the classical computational load by $O(T^2 d)$. Extensive experiments across various computer vision datasets demonstrate that HQViT outperforms existing models, achieving a maximum improvement of up to $10.9\%$ (on the MNIST 10-classification task) over the state of the art. This work highlights the great potential of combining quantum and classical computing to tackle complex image classification tasks.
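To make the $O(\log_2 N)$ qubit claim concrete: amplitude encoding stores an $N$-pixel image as the amplitudes of a $\lceil\log_2 N\rceil$-qubit state. The following is a minimal classical sketch of that preparation step (the helper `amplitude_encode` is our illustrative name, not the paper's implementation):

```python
import math
import numpy as np

def amplitude_encode(image: np.ndarray):
    """Flatten an image and L2-normalize it into a quantum state vector.

    Returns the amplitude vector, zero-padded to the next power of two,
    and the qubit count, which grows as O(log2 N) in the pixel count N.
    """
    amplitudes = image.astype(float).ravel()
    n_qubits = math.ceil(math.log2(amplitudes.size))
    dim = 2 ** n_qubits
    padded = np.zeros(dim)
    padded[: amplitudes.size] = amplitudes
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode an all-zero image")
    return padded / norm, n_qubits

# A 28x28 MNIST image (784 pixels) fits in ceil(log2(784)) = 10 qubits,
# versus 784 classical values stored explicitly.
state, qubits = amplitude_encode(np.random.rand(28, 28))
print(qubits)  # 10
```

Because the whole image occupies one global state, relative pixel positions are implicit in the amplitude indices, which is why the abstract notes that no additional positional encoding is required.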