🤖 AI Summary
This paper establishes computational lower bounds for generating a distribution statistically close to a uniform random permutation of $[n]$ from a uniformly random input sequence over $[n]$, under the constraint that each output cell probes the input at most $d$ times, either adaptively or non-adaptively. Using information-theoretic analysis, probabilistic methods, and structural constraints on probe patterns in the cell-probe model, the authors prove that $d \geq (\log n)^{\Omega(1)}$ in the adaptive setting, tight up to the constant in the exponent, and $d \geq n^{\Omega(1)}$ in the non-adaptive setting, an exponential improvement over the previous non-adaptive bound. These results resolve Viola's long-standing conjecture that $d \geq \omega(1)$ and immediately imply matching lower bounds for succinct data structures supporting permutation storage and retrieval.
📝 Abstract
Suppose we are given an infinite sequence of input cells, each initialized with a uniform random symbol from $[n]$. How hard is it to output a sequence in $[n]^n$ that is close to a uniform random permutation? Viola (SICOMP 2020) conjectured that if each output cell is computed by making $d$ probes to input cells, then $d \geq \omega(1)$. Our main result shows that, in fact, $d \geq (\log n)^{\Omega(1)}$, which is tight up to the constant in the exponent. Our techniques also show that if the probes are nonadaptive, then $d \geq n^{\Omega(1)}$, which is an exponential improvement over the previous nonadaptive lower bound due to Yu and Zhan (ITCS 2024). Our results also imply lower bounds against succinct data structures for storing permutations.
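To make the model concrete, here is a minimal sketch (not from the paper; the function names are illustrative) of the simplest possible generator, where each output cell makes a single nonadaptive probe, `output[i] = input[i]`. It illustrates why low-probe generation is nontrivial: a uniform sequence in $[n]^n$ is a permutation only with probability $n!/n^n \approx e^{-n}$, so a $d=1$ identity map is statistically very far from a uniform permutation.

```python
import random

def d1_identity_generator(n, rng):
    # Each output cell probes exactly one input cell (d = 1, nonadaptive):
    # output[i] = input[i], where input cells hold uniform symbols from [n].
    inputs = [rng.randrange(n) for _ in range(n)]
    return inputs

def is_permutation(seq, n):
    # A sequence in [n]^n is a permutation iff it hits every symbol once.
    return sorted(seq) == list(range(n))

# By a birthday-style argument, a uniform sequence in [n]^n collides with
# overwhelming probability, so the d = 1 generator essentially never
# outputs a permutation even for modest n.
rng = random.Random(0)
n = 20
hits = sum(is_permutation(d1_identity_generator(n, rng), n)
           for _ in range(2000))
```

With $n = 20$, the per-trial success probability is $20!/20^{20} \approx 2 \times 10^{-8}$, so `hits` is 0 across all 2000 trials; closing this gap between trivial generators and near-uniform permutation output is exactly what the lower bounds quantify.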