🤖 AI Summary
Variational quantum classifiers suffer from degraded classification performance on complex data (e.g., images) due to suboptimal input state distributions under standard amplitude encoding.
Method: We propose a triplet-loss-based quantum encoding scheme—the first application of classical face recognition’s triplet loss in quantum machine learning. It optimizes amplitude-encoded quantum input states to tighten intra-class clustering and enlarge inter-class separation in Hilbert space, using the average trace distance between density matrices as a differentiability-aware separability metric, enabling efficient training on shallow-depth variational circuits.
Results: On multiple binary-classification tasks from MNIST and MedMNIST, our method achieves significantly higher accuracy than conventional amplitude encoding while reducing circuit depth by 30–50%. This demonstrates simultaneous improvements in both resource efficiency and generalization capability.
📝 Abstract
An efficient and data-driven encoding scheme is proposed to enhance the performance of variational quantum classifiers. This encoding is specially designed for complex datasets like images and seeks to help the classification task by producing input states that form well-separated clusters in the Hilbert space according to their classification labels. The encoding circuit is trained using a triplet loss function inspired by classical facial recognition algorithms, and class separability is measured via average trace distances between the encoded density matrices. Benchmark tests performed on various binary classification tasks on MNIST and MedMNIST datasets demonstrate considerable improvement over amplitude encoding with the same VQC structure while requiring a much lower circuit depth.