Feature Entanglement-based Quantum Multimodal Fusion Neural Network

📅 2026-01-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a quantum entanglement–based multimodal fusion neural network to address the longstanding challenge of balancing accuracy, interpretability, and model complexity in classical multimodal fusion approaches. By introducing quantum entanglement mechanisms into multimodal fusion for the first time, the architecture integrates classical feedforward networks, an interpretable quantum fusion block, and a quantum convolutional neural network (QCNN). This design maintains decision-level interpretability while reducing model complexity to linear scale and enhancing representational capacity. Experimental results demonstrate that the proposed model achieves classification accuracy comparable to large-scale classical networks on multimodal image datasets, using significantly fewer parameters, and exhibits superior stability and performance.

📝 Abstract
Multimodal learning aims to enhance perceptual and decision-making capabilities by integrating information from diverse sources. However, classical deep learning approaches face a critical trade-off between the high accuracy of black-box feature-level fusion and the interpretability of less accurate decision-level fusion, alongside the challenges of parameter explosion and model complexity. This paper discusses the accuracy-interpretability-complexity dilemma under the quantum computation framework and proposes a feature entanglement-based quantum multimodal fusion neural network. The model is composed of three core components: a classical feed-forward module for unimodal processing, an interpretable quantum fusion block, and a quantum convolutional neural network (QCNN) for deep feature extraction. By leveraging the strong expressive power of quantum states, we reduce the complexity of multimodal fusion and post-processing to linear scale, while the fusion process retains the interpretability of decision-level fusion. Simulation results demonstrate that our model achieves classification accuracy comparable to that of classical networks with dozens of times more parameters, exhibiting notable stability and performance across multimodal image datasets.
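The core idea in the abstract, entangling angle-encoded unimodal features and reading out the joint state as the fused representation, can be illustrated with a toy two-qubit statevector simulation. This is a minimal sketch of the general technique, not the authors' architecture: the `fuse` function, the single-scalar-per-modality encoding, and the fixed CNOT entangler are all illustrative assumptions.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate Ry(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target
# (basis ordering |00>, |01>, |10>, |11>)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def fuse(x_a, x_b):
    """Toy entanglement-based fusion of two scalar modality features.

    Angle-encode each feature on its own qubit, entangle the pair with a
    CNOT, and return the per-qubit Pauli-Z expectations as the fused
    representation (illustrative, not the paper's actual circuit).
    """
    # |psi> = (Ry(x_a) |0>) kron (Ry(x_b) |0>)
    state = np.kron(ry(x_a) @ np.array([1.0, 0.0]),
                    ry(x_b) @ np.array([1.0, 0.0]))
    state = CNOT @ state  # entangle the two modalities
    p = state ** 2        # basis probabilities (amplitudes are real here)
    z0 = p[0] + p[1] - p[2] - p[3]  # <Z> on qubit 0
    z1 = p[0] - p[1] + p[2] - p[3]  # <Z> on qubit 1
    return z0, z1
```

With `x_a = pi/2, x_b = 0` the circuit prepares a Bell state, so both expectations collapse to 0: the readout on one modality's qubit depends jointly on both inputs, which is the sense in which entanglement fuses the features.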
Problem

Research questions and friction points this paper is trying to address.

multimodal fusion
accuracy-interpretability trade-off
parameter explosion
model complexity
quantum computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

quantum multimodal fusion
feature entanglement
interpretable quantum neural network
quantum convolutional neural network
accuracy-interpretability-complexity trade-off
Yu Wu
University of Cambridge
machine learning · health sensing · mobile health
Qianli Zhou
School of Electronics and Information, Northwestern Polytechnical University, Xi’an, 710072, China
Jie Geng
School of Electronics and Information, Northwestern Polytechnical University, Xi’an, 710072, China
Xinyang Deng
School of Electronics and Information, Northwestern Polytechnical University, Xi’an, 710072, China
Wen Jiang
School of Electronics and Information, Northwestern Polytechnical University, Xi’an, 710072, China