🤖 AI Summary
This study addresses goal-oriented optimization by an agent interacting with an environment that maintains a hidden quantum memory evolving under unknown quantum channel dynamics, with only partial, probabilistic feedback. To this end, the work establishes a reinforcement learning framework tailored to quantum processes with memory, in which the agent intervenes via sequential quantum instruments and measurements are modeled as positive operator-valued measures (POVMs) over a continuous action space. An adaptive strategy based on optimistic maximum-likelihood estimation is proposed, combining regret analysis with control of how estimation errors propagate through quantum channels and instruments. The algorithm achieves an $\tilde{O}(\sqrt{K})$ cumulative regret bound over $K$ episodes, and a reduction to the multi-armed quantum bandit problem yields information-theoretic lower bounds showing this scaling is optimal up to polylogarithmic factors. Applied to state-agnostic work extraction from non-i.i.d. quantum states correlated by a hidden memory, regret exactly quantifies cumulative thermodynamic dissipation, so the algorithm attains sublinear cumulative dissipation and hence an asymptotically vanishing dissipation rate.
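The measurement model underlying the summary (POVM outcomes drawn via the Born rule from a hidden state) can be sketched in a few lines. This is only an illustrative toy, not the paper's construction: the two-outcome POVM `{E0, E1}` and the memory state `rho` below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-outcome qubit POVM {E0, E1} with E0 + E1 = I.
# In the paper, POVMs range over a continuous action space; here we
# fix one element for illustration.
theta = 0.3
E0 = np.array([[np.cos(theta) ** 2, 0.0],
               [0.0, np.sin(theta) ** 2]])
E1 = np.eye(2) - E0

# Hidden memory state rho (a density matrix); |+><+| as a stand-in.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

# Born rule: p(a) = Tr(E_a rho). The agent only sees a sample from p,
# which is the "partial probabilistic feedback" of the setting.
p = np.real(np.array([np.trace(E0 @ rho), np.trace(E1 @ rho)]))
outcome = rng.choice(2, p=p / p.sum())
```

The agent never observes `rho` itself; it must infer the hidden dynamics from such sampled outcomes, which is what drives the exploration-exploitation trade-off.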
📝 Abstract
In reinforcement learning, an agent interacts sequentially with an environment to maximize a reward, receiving only partial, probabilistic feedback. This creates a fundamental exploration-exploitation trade-off: the agent must explore to learn the hidden dynamics while exploiting this knowledge to maximize its objective. While extensively studied classically, applying this framework to quantum systems requires dealing with hidden quantum states that evolve via unknown dynamics. We formalize this problem via a framework where the environment maintains a hidden quantum memory evolving via unknown quantum channels, and the agent intervenes sequentially using quantum instruments. For this setting, we adapt an optimistic maximum-likelihood estimation algorithm. We extend the analysis to continuous action spaces, allowing us to model general positive operator-valued measures (POVMs). By controlling the propagation of estimation errors through quantum channels and instruments, we prove that the cumulative regret of our strategy scales as $\widetilde{\mathcal{O}}(\sqrt{K})$ over $K$ episodes. Furthermore, via a reduction to the multi-armed quantum bandit problem, we establish information-theoretic lower bounds demonstrating that this sublinear scaling is strictly optimal up to polylogarithmic factors. As a physical application, we consider state-agnostic work extraction. When extracting free energy from a sequence of non-i.i.d. quantum states correlated by a hidden memory, any lack of knowledge about the source leads to thermodynamic dissipation. In our setting, the mathematical regret exactly quantifies this cumulative dissipation. Using our adaptive algorithm, the agent uses past energy outcomes to improve its extraction protocol on the fly, achieving sublinear cumulative dissipation and, consequently, an asymptotically vanishing dissipation rate.
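The optimism principle behind the algorithm, and the reading of regret as cumulative dissipation, can be illustrated with a purely classical toy. The sketch below uses a standard UCB bandit in place of the paper's optimistic maximum-likelihood method, with each "arm" standing in for a fixed extraction protocol and its mean reward for extracted work; the means are hypothetical, and the quantum memory and channels of the actual setting are absent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mean rewards of three candidate "extraction protocols".
means = np.array([0.2, 0.5, 0.8])
K = 5000                      # number of episodes
counts = np.zeros(3)          # pulls per arm
sums = np.zeros(3)            # accumulated rewards per arm
regret = 0.0                  # cumulative dissipation analogue

for k in range(K):
    if k < 3:
        a = k                 # pull each arm once to initialize
    else:
        # Optimism: act on the arm with the highest upper confidence bound.
        ucb = sums / counts + np.sqrt(2.0 * np.log(k + 1) / counts)
        a = int(np.argmax(ucb))
    r = float(rng.random() < means[a])   # Bernoulli feedback
    counts[a] += 1
    sums[a] += r
    # Per-round gap to the best protocol: the "dissipated" work.
    regret += means.max() - means[a]

rate = regret / K   # average dissipation rate, small for large K
```

Because optimistic play concentrates pulls on the best arm, `regret` grows sublinearly in `K`, so `rate` tends to zero, mirroring the asymptotically vanishing dissipation rate claimed in the abstract (there at the $\widetilde{\mathcal{O}}(\sqrt{K})$ rate for the quantum setting).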