🤖 AI Summary
This work addresses the challenge of efficiently estimating repeatedly nested expectations (RNEs) with a constant number of nesting levels, which arise in applications such as optimal stopping. The authors propose a novel approach that integrates quantum algorithms with a derandomized variant of the randomized multilevel Monte Carlo (rMLMC) method. By extending the quantum speedup from single-level to multi-level nested expectations and introducing a new derandomization strategy to circumvent the variable-runtime issue, the algorithm attains target accuracy ε at a computational cost of Õ(ε⁻¹). This complexity nearly matches the theoretical lower bound and offers an almost quadratic speedup over the best classical methods, substantially improving the efficiency of estimating nested expectations.
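To fix notation (an illustrative formulation; the paper's exact setup may differ in details), a depth-$d$ RNE can be written through the backward recursion

$$
v_d(x_{1:d}) = f_d(x_{1:d}), \qquad
v_k(x_{1:k}) = f_k\big(x_{1:k},\ \mathbb{E}\big[\,v_{k+1}(x_{1:k}, X_{k+1}) \mid X_{1:k} = x_{1:k}\,\big]\big), \quad k = d-1,\dots,1,
$$

with target $I = \mathbb{E}[v_1(X_1)]$: each conditional expectation must itself be estimated before the enclosing one can be evaluated, which is why naive nested simulation compounds its cost across the $d$ levels. Optimal stopping fits this pattern with $f_k(x_{1:k}, c) = \max\{\mathrm{payoff}_k(x_{1:k}),\, c\}$, one nesting per exercise date.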
📝 Abstract
We study the estimation of repeatedly nested expectations (RNEs) with a constant horizon (number of nestings) using quantum computing. We propose a quantum algorithm that achieves $\varepsilon$-error with cost $\tilde O(\varepsilon^{-1})$, up to logarithmic factors. Standard lower bounds show this scaling is essentially optimal, yielding an almost quadratic speedup over the best classical algorithm. Our results extend prior quantum speedups for singly nested expectations to repeated nesting, and therefore cover a broader range of applications, including optimal stopping. This extension requires a new derandomized variant of the classical randomized multilevel Monte Carlo (rMLMC) algorithm. Careful derandomization is key to overcoming a variable-time issue that typically increases the cost of quantizing classical randomized algorithms.
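For intuition, here is a minimal sketch of the classical randomized-level MLMC construction for a single nested expectation $\mathbb{E}[f(\mathbb{E}[g(X,Y)\mid X])]$, the kind of estimator the paper's derandomized variant builds on. Everything here is an illustrative assumption rather than the paper's algorithm: the function names, the geometric level distribution, and the parameters `p` and `n0` are hypothetical choices.

```python
import numpy as np

def rmlmc_sample(rng, sample_x, sample_y, f, g, p=0.6, n0=2):
    """Draw one unbiased rMLMC sample of E[ f( E[ g(X, Y) | X ] ) ].

    A random level L is drawn with P(L = l) = p * (1 - p)**l; level l
    averages n0 * 2**l inner samples, and the antithetic difference
    between the full average and its two halves telescopes (in
    expectation) to the nested quantity.
    """
    x = sample_x(rng)
    level = rng.geometric(p) - 1       # level in {0, 1, 2, ...}
    prob = p * (1.0 - p) ** level      # probability of that level
    # p > 1/2 keeps the expected number of inner samples finite.
    n = n0 * 2 ** level
    ys = np.array([g(x, sample_y(x, rng)) for _ in range(n)])
    fine = f(ys.mean())                # fine estimate: all n inner samples
    if level == 0:
        delta = fine
    else:
        half = n // 2                  # coarse estimate: the two halves
        coarse = 0.5 * (f(ys[:half].mean()) + f(ys[half:].mean()))
        delta = fine - coarse
    return delta / prob                # importance-weight the random level

# Toy check: X ~ N(0,1), Y | X ~ N(X,1), so E[max(E[Y|X], 0)] = E[max(X,0)].
rng = np.random.default_rng(0)
samples = [rmlmc_sample(rng,
                        lambda r: r.normal(),
                        lambda x, r: r.normal(loc=x),
                        lambda m: max(m, 0.0),
                        lambda x, y: y)
           for _ in range(100_000)]
print(np.mean(samples))   # ~ 1/sqrt(2*pi) ≈ 0.3989
```

Note that the random level makes a single sample's runtime unbounded even though its expected cost is finite; this is the variable-time behavior that, per the abstract, careful derandomization removes before quantization.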