🤖 AI Summary
On early fault-tolerant quantum computers with scarce qubits, classical matrix block-encoding faces a fundamental trade-off among circuit size, normalization factor, and classical computational overhead.
Method: This paper proposes Binary Tree Block-Encoding (BITBLE), the first block-encoding protocol to jointly optimize circuit depth, normalization factor, and time–space complexity of classical parameter computation. BITBLE requires only O(1) ancillary qubits and constructs decoupled unitaries in O(n2^{2n}) classical preprocessing time and Θ(2^{2n}) memory. Its core techniques include binary-tree divide-and-conquer encoding, efficient circuit synthesis, and joint optimization of normalization factors.
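The binary-tree divide-and-conquer idea can be illustrated with the classic parameter computation for amplitude encoding, where leaves hold squared amplitudes, internal nodes hold subtree sums, and each node yields one rotation angle. This is a background sketch under those standard conventions, not BITBLE's actual decoupling procedure; the function name is illustrative.

```python
import numpy as np

# Illustrative sketch only: binary-tree angle computation for amplitude
# encoding of a normalized, nonnegative real vector. BITBLE's decoupling
# differs; this shows the divide-and-conquer tree structure.
def tree_angles(vec):
    """Return RY rotation angles, one array per tree level (root first)."""
    p = np.asarray(vec, dtype=float) ** 2   # leaves: squared amplitudes
    levels = [p]
    while len(levels[-1]) > 1:              # pairwise sums up the tree
        levels.append(levels[-1][0::2] + levels[-1][1::2])
    angles = []
    for lvl in reversed(levels[:-1]):       # walk back down, root first
        parent = lvl[0::2] + lvl[1::2]
        angles.append(2 * np.arccos(np.sqrt(lvl[0::2] / parent)))
    return angles

# Uniform 4-dimensional state: every branch needs an RY(pi/2) rotation.
angles = tree_angles([0.5, 0.5, 0.5, 0.5])
```

Classically, building the tree for a length-2^n vector touches each level once, which is the source of the near-linear-per-entry preprocessing cost in tree-based encodings.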
Results: Experiments demonstrate that BITBLE significantly improves resource-efficiency trade-offs and classical preprocessing speed under the size-metric benchmark. All algorithms are open-sourced and support plug-and-play quantum data loading.
📄 Abstract
Block-encoding is a critical subroutine in quantum computing, enabling the transformation of classical data into a matrix representation within a quantum circuit. The resource trade-offs in simulating a block-encoding can be quantified by the circuit size, the normalization factor, and the time and space complexity of parameter computation. Previous studies have primarily focused either on the time and memory complexity of computing the parameters, or on the circuit size and normalization factor in isolation, often neglecting the balance between these trade-offs. In early fault-tolerant quantum computers, the number of qubits is limited. For a classical matrix of size $2^{n} \times 2^{n}$, our approach not only improves the time of computing the decoupling unitaries for block-encoding, achieving time complexity $\mathcal{O}(n2^{2n})$ and memory complexity $\Theta(2^{2n})$ with only a few ancilla qubits, but also demonstrates superior resource trade-offs. Our proposed block-encoding protocol is named Binary Tree Block-encoding (\texttt{BITBLE}). Under the benchmark \textit{size metric}, defined as the product of the number of gates and the normalization factor, numerical experiments demonstrate improvements in both the resource trade-off and the classical computing-time efficiency of the \texttt{BITBLE} protocol. The algorithms are all open-source.
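For orientation, the block-encoding condition the abstract refers to can be checked numerically: a unitary $U$ block-encodes a matrix $A$ with normalization factor $\alpha$ if its top-left block equals $A/\alpha$. The sketch below uses the generic Halmos unitary dilation to build such a $U$; it illustrates the definition and the role of the normalization factor, not the BITBLE circuit construction.

```python
import numpy as np

# Background sketch of the block-encoding condition (not the BITBLE
# construction): U block-encodes A with factor alpha iff U's top-left
# block equals A / alpha.

def psd_sqrt(M):
    """Matrix square root of a positive semidefinite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

A = np.array([[1.0, 0.5],
              [0.5, -1.0]])  # a 2^n x 2^n example with n = 1

# alpha must satisfy alpha >= ||A||_2 so that A / alpha is a
# contraction; here ||A||_2 ~ 1.118, so alpha = 2 suffices.
alpha = 2.0
B = A / alpha

# Halmos dilation: U = [[B, sqrt(I - B B^T)], [sqrt(I - B^T B), -B^T]]
I2 = np.eye(2)
U = np.block([[B,                      psd_sqrt(I2 - B @ B.T)],
              [psd_sqrt(I2 - B.T @ B), -B.T                  ]])

assert np.allclose(U @ U.T, np.eye(4))    # U is unitary
assert np.allclose(alpha * U[:2, :2], A)  # top-left block recovers A
```

A smaller $\alpha$ means less amplitude is lost when the encoded block is applied, which is why the size metric weights the gate count by the normalization factor.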