🤖 AI Summary
This work proposes a quantum entanglement–based multimodal fusion neural network to address the longstanding challenge of balancing accuracy, interpretability, and model complexity in classical multimodal fusion. Introducing quantum entanglement mechanisms into multimodal fusion for the first time, the architecture integrates classical feed-forward networks, an interpretable quantum fusion block, and a quantum convolutional neural network (QCNN). This design preserves decision-level interpretability while reducing model complexity to linear scale and enhancing representational capacity. Experiments show that the model matches the classification accuracy of much larger classical networks on multimodal image datasets with significantly fewer parameters, while exhibiting greater stability.
📝 Abstract
Multimodal learning aims to enhance perceptual and decision-making capabilities by integrating information from diverse sources. However, classical deep learning approaches face a critical trade-off between the high accuracy of black-box feature-level fusion and the interpretability of decision-level fusion, whose accuracy is typically lower, alongside the challenges of parameter explosion and model complexity. This paper examines the accuracy-interpretability-complexity dilemma under the quantum computation framework and proposes a feature entanglement-based quantum multimodal fusion neural network. The model comprises three core components: a classical feed-forward module for unimodal processing, an interpretable quantum fusion block, and a quantum convolutional neural network (QCNN) for deep feature extraction. By leveraging the strong expressive power of quantum states, we reduce the complexity of multimodal fusion and post-processing to linear scale, while the fusion process retains the interpretability of decision-level fusion. Simulation results demonstrate that our model achieves classification accuracy comparable to classical networks with dozens of times more parameters, exhibiting notable stability and performance across multimodal image datasets.
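To make the three-stage pipeline in the abstract concrete, here is a minimal NumPy sketch of entanglement-based fusion of two modalities. Everything below is an illustrative assumption, not the paper's actual circuits: the classical feed-forward stage is stood in for by raw feature vectors, the "quantum fusion block" by a random joint unitary, and the QCNN/readout stage by measurement probabilities. Each modality is amplitude-encoded as a quantum state, the joint state is formed by a tensor product, and a non-separable unitary on the joint register correlates (entangles) the modalities.

```python
import numpy as np

rng = np.random.default_rng(0)

def amplitude_encode(x):
    """L2-normalize a feature vector so it is a valid quantum state."""
    return x / np.linalg.norm(x)

def random_unitary(n, rng):
    """Random unitary via QR decomposition (stand-in for the fusion block)."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(m)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases to keep unitarity

# Hypothetical outputs of the classical feed-forward stage:
# each modality compressed to 4 dimensions (2 qubits per modality).
img_feat = rng.normal(size=4)
txt_feat = rng.normal(size=4)

psi_img = amplitude_encode(img_feat)
psi_txt = amplitude_encode(txt_feat)

# Joint (product) state of both modalities: 16 amplitudes (4 qubits).
joint = np.kron(psi_img, psi_txt)

# A joint unitary that does not factor into per-modality unitaries
# entangles the two registers, mixing information across modalities.
U = random_unitary(16, rng)
fused = U @ joint

# Measurement probabilities serve as the fused feature vector
# that downstream processing (the QCNN in the paper) would consume.
probs = np.abs(fused) ** 2
```

Note the linear-scale intuition visible even in this toy: the fused representation is read out as one probability vector per sample, rather than through a quadratic cross-modal interaction matrix.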