HQViT: Hybrid Quantum Vision Transformer for Image Classification

📅 2025-04-03
🤖 AI Summary
Vision Transformers (ViTs) incur prohibitive computational costs on high-resolution images due to the O(N²) complexity of self-attention. To address this, we propose the Hybrid Quantum Vision Transformer (HQViT), a classical-quantum co-designed architecture that reduces computational overhead while improving performance. Our key contributions are: (1) the first full-image amplitude encoding scheme—replacing conventional positional encoding—to enable global quantum state representation; (2) selective quantum acceleration applied only to critical attention coefficients, reducing qubit requirements to O(log₂N), thus enabling deployment on near-term NISQ devices; and (3) a parameterized quantum circuit integrated into a hybrid training framework, supporting end-to-end differentiable optimization. Evaluated on benchmarks including MNIST, HQViT achieves up to 10.9% higher accuracy than state-of-the-art methods, while reducing classical computational load by O(T²d).
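The $O(T^2 d)$ term the summary mentions comes from forming the attention coefficient matrix $QK^\top$ for $T$ tokens of dimension $d$. A minimal classical NumPy sketch of that bottleneck (illustrative shapes and names only, not the paper's quantum circuit):

```python
import numpy as np

def attention_scores(Q, K):
    """Scaled dot-product attention coefficients: softmax(QK^T / sqrt(d))."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # O(T^2 d) -- the dominant cost
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

T, d = 64, 32                                      # e.g. 64 patch tokens, dim 32
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(T, d)), rng.normal(size=(T, d))
A = attention_scores(Q, K)
print(A.shape)                                     # (64, 64): T^2 coefficients
```

It is exactly this $T \times T$ coefficient computation that HQViT offloads to the quantum framework, leaving the rest of the pipeline classical.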

📝 Abstract
Transformer-based architectures have revolutionized the landscape of deep learning. In the computer vision domain, the Vision Transformer demonstrates remarkable performance on par with or even surpassing that of convolutional neural networks. However, the quadratic computational complexity of its self-attention mechanism poses challenges for classical computing, making model training with high-dimensional input data, e.g., images, particularly expensive. To address such limitations, we propose a Hybrid Quantum Vision Transformer (HQViT) that leverages the principles of quantum computing to accelerate model training while enhancing model performance. HQViT introduces whole-image processing with amplitude encoding to better preserve global image information without additional positional encoding. By leveraging quantum computation for the most critical steps and selectively handling other components in a classical way, we lower the cost of quantum resources for HQViT. The qubit requirement is minimized to $O(\log_2 N)$ and the number of parameterized quantum gates is only $O(\log_2 d)$, making it well-suited for Noisy Intermediate-Scale Quantum devices. By offloading the computationally intensive attention coefficient matrix calculation to the quantum framework, HQViT reduces the classical computational load by $O(T^2 d)$. Extensive experiments across various computer vision datasets demonstrate that HQViT outperforms existing models, achieving an improvement of up to $10.9\%$ (on the MNIST 10-classification task) over the state of the art. This work highlights the great potential of combining quantum and classical computing to cope with complex image classification tasks.
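The abstract's $O(\log_2 N)$ qubit claim follows from amplitude encoding: $N$ pixel values become the amplitudes of a $\log_2 N$-qubit state. A sketch of the classical state-vector preparation (circuit synthesis for loading it on hardware is outside this illustration):

```python
import numpy as np

def amplitude_encode(image):
    """Flatten an image into a unit-norm amplitude vector over log2(N) qubits."""
    v = np.asarray(image, dtype=float).ravel()
    n = len(v)
    assert n & (n - 1) == 0, "pixel count must be a power of two (pad if not)"
    state = v / np.linalg.norm(v)    # amplitudes of the quantum state
    n_qubits = int(np.log2(n))       # O(log2 N) qubits suffice
    return state, n_qubits

img = np.arange(1, 17).reshape(4, 4)  # toy 4x4 "image", N = 16 pixels
state, q = amplitude_encode(img)
print(q)                              # 4 qubits encode 16 amplitudes
```

Because the whole image occupies a single state vector, global structure is carried by the amplitudes themselves, which is why the paper can dispense with a separate positional encoding.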
Problem

Research questions and friction points this paper is trying to address.

Quadratic self-attention complexity makes ViT training on images expensive
How quantum computing can accelerate training on near-term (NISQ) hardware
Improving classification accuracy under limited quantum resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid quantum-classical Vision Transformer for efficiency
Amplitude encoding preserves global image information
Minimizes qubit and quantum gate requirements
Hui Zhang
Faculty of Innovation Engineering, Macau University of Science and Technology, Macao 999078, China
Qinglin Zhao
Faculty of Innovation Engineering, Macau University of Science and Technology, Macao 999078, China
Mengchu Zhou
Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102 USA
Li Feng
Associate Professor of Radiology & Director of Rapid Imaging, NYU Grossman School of Medicine
Magnetic Resonance Imaging · Image Reconstruction