Structured Unitary Tensor Network Representations for Circuit-Efficient Quantum Data Encoding

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key bottleneck in quantum machine learning: conventional quantum data encoding schemes demand high circuit depth and substantial quantum resources. The authors propose TNQE, a novel framework that represents classical data using structured unitary tensor networks. By combining block-wise unitary parameterization with unitarity-aware constraints, TNQE makes the encoding end-to-end trainable and highly efficient. Coupled with a differentiable quantum circuit compilation strategy, the method substantially reduces both circuit depth and qubit overhead. Experiments show that TNQE can encode high-resolution 256×256 images at only 4% of the circuit depth required by amplitude encoding, while remaining feasible on real quantum hardware.
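As a rough sanity check on the scale involved, the qubit count for amplitude-encoding a 256×256 image follows from a short calculation. The O(2^n)-depth scaling for generic amplitude-state preparation is a standard result; the numbers below are illustrative context, not figures taken from the paper:

```python
import math

# A 256x256 grayscale image has 65,536 pixel values to encode as amplitudes.
pixels = 256 * 256

# Amplitude encoding packs 2^n amplitudes into n qubits.
n_qubits = math.ceil(math.log2(pixels))
print(n_qubits)  # 16

# Generic (unstructured) amplitude-state preparation needs circuit depth on
# the order of 2^n gates, which is why a 0.04x depth factor matters at this scale.
relative_depth_tnqe = 0.04  # the paper's reported fraction of amplitude-encoding depth
```

So even though only 16 qubits are needed, the depth of naive amplitude encoding grows exponentially in that qubit count, which is the cost TNQE targets.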

📝 Abstract
Encoding classical data into quantum states is a central bottleneck in quantum machine learning: many widely used encodings are circuit-inefficient, requiring deep circuits and substantial quantum resources, which limits scalability on quantum hardware. In this work, we propose TNQE, a circuit-efficient quantum data encoding framework built on structured unitary tensor network (TN) representations. TNQE first represents each classical input via a TN decomposition and then compiles the resulting tensor cores into an encoding circuit through two complementary core-to-circuit strategies. To make this compilation trainable while respecting the unitary nature of quantum operations, we introduce a unitary-aware constraint that parameterizes TN cores as learnable block unitaries, enabling them to be directly optimized and encoded as quantum operators. The proposed TNQE framework enables explicit control over circuit depth and qubit resources, allowing the construction of shallow, resource-efficient circuits. Across a range of benchmarks, TNQE achieves encoding circuits as shallow as $0.04\times$ the depth of amplitude encoding, while naturally scaling to high-resolution images ($256 \times 256$) and demonstrating practical feasibility on real quantum hardware.
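A minimal sketch of the "learnable block unitary" idea from the abstract, assuming nothing about the paper's actual implementation: an unconstrained parameter matrix can be mapped to a unitary by exponentiating its skew-Hermitian part, so gradient updates on the raw parameters always yield a valid quantum operator. The helper name `block_unitary` is hypothetical:

```python
import numpy as np
from scipy.linalg import expm

def block_unitary(params: np.ndarray) -> np.ndarray:
    """Map an unconstrained complex matrix to a unitary of the same size.

    The skew-Hermitian part A = (P - P^H)/2 satisfies A^H = -A, so
    expm(A) is exactly unitary. Optimization can therefore run freely on
    `params` without ever leaving the unitary manifold.
    """
    skew = (params - params.conj().T) / 2
    return expm(skew)

# Example: a learnable 4x4 TN core (acting on two qubits) stays unitary.
rng = np.random.default_rng(0)
raw = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U = block_unitary(raw)
print(np.allclose(U @ U.conj().T, np.eye(4)))  # True
```

In TNQE's setting, each tensor-network core would be parameterized along these lines and then compiled into gates; the matrix-exponential map is only one common choice of unitary retraction (a QR- or Cayley-based map would serve the same purpose).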
Problem

Research questions and friction points this paper is trying to address.

quantum data encoding
circuit efficiency
quantum machine learning
scalability
quantum resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

unitary tensor network
quantum data encoding
circuit-efficient
trainable quantum circuits
tensor-to-circuit compilation