🤖 AI Summary
This work addresses the construction of secure and practical one-time memories (OTMs) in the quantum random oracle model. It proposes a novel scheme combining single-qubit Wiesner states with conjunction obfuscation based on the Learning Parity with Noise (LPN) assumption, achieving simulation-based security in the classical-access quantum random oracle model. The main contributions include the first integration of Wiesner states and conjunction obfuscation for OTM construction, the establishment of new bounds on positive operator-valued measure (POVM) distinguishability, and the introduction of an adaptive-depth quantum circuit security model that enables security analysis against bounded-depth quantum adversaries. This research provides both a theoretical foundation and a practical pathway toward realizing quantum-resistant one-time programs suitable for long-term storage.
📝 Abstract
We construct simulation-secure one-time memories (OTMs) in the random oracle model and present a plausible argument for their security against quantum adversaries with bounded, adaptive depth. Our contributions include:

1. **A simple scheme.** We use only single-qubit Wiesner states and conjunction obfuscation (constructible from LPN); no complex entanglement or quantum cryptography is required.
2. **A new POVM bound.** We prove that any measurement achieving $(1 - \epsilon)$ success in one basis has conjugate-basis guessing probability at most $\frac{1}{2m} + O(\epsilon^{1/4})$.
3. **Simulation-secure OTMs in the quantum random oracle model**, where the adversary may query the random oracle only classically.
4. **Adaptive-depth security.** Via an informal application of a lifting theorem of Arora et al., we conjecture security against adversaries with polynomial quantum circuit depth between random oracle queries.

Security against adaptive, depth-bounded quantum adversaries captures many realistic attacks on OTMs built from single-qubit states; our work thus paves the way toward practical and truly secure one-time programs. Moreover, depth-bounded adaptive adversarial models may allow one-time memories to be encoded into error-corrected memory states, opening the door to implementations of one-time programs that persist for long periods of time.
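To make the "single-qubit Wiesner states" ingredient concrete, here is a minimal classical simulation of Wiesner-style conjugate coding: each bit is encoded in a randomly chosen basis (computational $Z$ or Hadamard $X$), a matching-basis measurement recovers it exactly, and a mismatched-basis measurement yields a uniformly random outcome. This sketch is illustrative only and is not the paper's construction; all names (`encode`, `measure`) are ours.

```python
import math
import random

# Illustrative sketch (not the paper's scheme): Wiesner-style conjugate coding.
# A bit b is encoded in basis 'Z' as |0>/|1> or in basis 'X' as |+>/|->.

def encode(bit, basis):
    """Return the single-qubit state as a real amplitude pair (a0, a1)."""
    if basis == 'Z':
        return (1.0, 0.0) if bit == 0 else (0.0, 1.0)
    s = 1.0 / math.sqrt(2)
    return (s, s) if bit == 0 else (s, -s)  # |+> or |->

def measure(state, basis):
    """Projective measurement in the given basis; returns the observed bit."""
    a0, a1 = state
    if basis == 'X':  # rotate amplitudes into the X basis first
        s = 1.0 / math.sqrt(2)
        a0, a1 = s * (a0 + a1), s * (a0 - a1)
    return 0 if random.random() < a0 * a0 else 1

if __name__ == "__main__":
    bits = [random.randint(0, 1) for _ in range(8)]
    bases = [random.choice('ZX') for _ in range(8)]
    states = [encode(b, B) for b, B in zip(bits, bases)]
    # Measuring every qubit in its correct basis recovers every bit.
    assert [measure(s, B) for s, B in zip(states, bases)] == bits
    print("all bits recovered in matching bases")
```

The security-relevant point, which the paper's POVM bound quantifies, is that an adversary who does not know the bases cannot do much better than guessing in the conjugate basis: measuring a $Z$-encoded qubit in the $X$ basis gives each outcome with probability exactly $1/2$.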