Random Access in Grammar-Compressed Strings: Optimal Trade-Offs in Almost All Parameter Regimes

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of efficient random access on strings compressed by straight-line programs (SLPs). It introduces novel grammar transformations that generalize contracting grammars, combined with information-theoretic lower bounds and deterministic construction algorithms, to establish upper and lower bounds that match across almost all parameter regimes in all key parameters: string length \(n\), grammar size \(g\), alphabet size \(\sigma\), available space \(M\), and word size \(w\). For any space budget \(M\) with \(g\log n < Mw < n\log\sigma\), the resulting data structure uses \(O(M)\) space and supports random access, substring extraction, rank, and select operations in time \(O\left(\frac{\log(n\log\sigma / Mw)}{\log(Mw / g\log n)}\right)\). This query time is shown to be unconditionally optimal over a broad range of parameter settings.
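For context, the classical baseline that the paper improves upon descends the SLP's parse tree from the start symbol, using precomputed expansion lengths to decide at each binary rule whether the queried position falls in the left or right child. The sketch below is a minimal illustration of that baseline (the toy grammar, the dictionary representation, and all function names are hypothetical, not the paper's data structure); an unbalanced descent takes time proportional to grammar depth, and the cited \(O(\log n)\)-time structures balance it.

```python
# Baseline random access on a toy straight-line program (SLP).
# Representation (illustrative only): each nonterminal maps either to a
# terminal character (str) or to a pair of symbols (binary production).

def expansion_lengths(rules, order):
    """Compute |exp(X)| for every symbol, processing rules bottom-up."""
    length = {}
    for sym in order:  # 'order' lists symbols so that children come first
        rhs = rules[sym]
        if isinstance(rhs, str):                     # X -> a
            length[sym] = 1
        else:                                        # X -> Y Z
            length[sym] = length[rhs[0]] + length[rhs[1]]
    return length

def access(rules, length, start, i):
    """Return T[i], where T = exp(start), by descending the parse tree."""
    sym = start
    while True:
        rhs = rules[sym]
        if isinstance(rhs, str):
            return rhs
        left, right = rhs
        if i < length[left]:                         # position is in exp(left)
            sym = left
        else:                                        # shift into exp(right)
            i -= length[left]
            sym = right

# Toy SLP generating T = "abab": A -> 'a', B -> 'b', C -> AB, S -> CC.
rules = {'A': 'a', 'B': 'b', 'C': ('A', 'B'), 'S': ('C', 'C')}
order = ['A', 'B', 'C', 'S']
length = expansion_lengths(rules, order)
print(''.join(access(rules, length, 'S', i) for i in range(length['S'])))
# prints "abab"
```

Note that each query touches one root-to-leaf path, so the cost is the grammar's depth; the paper's contribution is a trade-off that beats this for any admissible space budget \(M\).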

📝 Abstract
A Random Access query to a string $T\in [0..\sigma)^n$ asks for the character $T[i]$ at a given position $i\in [0..n)$. In $O(n\log\sigma)$ bits of space, this fundamental task admits constant-time queries. While this is optimal in the worst case, much research has focused on compressible strings, hoping for smaller data structures that still admit efficient queries. We investigate the grammar-compressed setting, where $T$ is represented by a straight-line grammar. Our main result is a general trade-off that optimizes Random Access time as a function of string length $n$, grammar size (the total length of productions) $g$, alphabet size $\sigma$, data structure size $M$, and word size $w=\Omega(\log n)$ of the word RAM model. For any $M$ with $g\log n<Mw<n\log\sigma$, we show an $O(M)$-size data structure with query time $O(\frac{\log(n\log\sigma\,/\,Mw)}{\log(Mw\,/\,g\log n)})$. Remarkably, we also prove a matching unconditional lower bound that holds for all parameter regimes except very small grammars and relatively small data structures. Previous work focused on query time as a function of $n$ only, achieving $O(\log n)$ time using $O(g)$ space [Bille et al.; SIAM J. Comput. 2015] and $O(\frac{\log n}{\log \log n})$ time using $O(g\log^{\epsilon} n)$ space for any constant $\epsilon>0$ [Belazzougui et al.; ESA'15], [Ganardi, Jeż, Lohrey; J. ACM 2021]. The only tight lower bound [Verbin and Yu; CPM'13] was $\Omega(\frac{\log n}{\log\log n})$ for $w=\Theta(\log n)$, $n^{\Omega(1)}\le g\le n^{1-\Omega(1)}$, and $M=g\log^{\Theta(1)}n$. In contrast, our result yields tight bounds in all relevant parameters and almost all regimes. Our data structure admits efficient deterministic construction. It relies on novel grammar transformations that generalize contracting grammars [Ganardi; ESA'21]. Beyond Random Access, its variants support substring extraction, rank, and select.
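To get a feel for the trade-off, one can plug concrete parameters into the query-time bound $O(\frac{\log(n\log\sigma\,/\,Mw)}{\log(Mw\,/\,g\log n)})$. The sketch below evaluates the expression inside the $O(\cdot)$ for hypothetical sample values (a string of length $n=2^{40}$ over a byte alphabet, grammar size $g=2^{20}$, word size $w=64$); these numbers are illustrative only and do not come from the paper.

```python
import math

# Illustrative parameters (assumptions, not from the paper):
n = 2**40      # string length
sigma = 256    # alphabet size
g = 2**20      # grammar size (total length of productions)
w = 64         # word size, w = Omega(log n)

def query_time_bound(M):
    """Evaluate log(n log sigma / (M w)) / log(M w / (g log n)),
    the expression inside the paper's O(.) query-time bound.
    Requires g log n < M w < n log sigma."""
    assert g * math.log2(n) < M * w < n * math.log2(sigma)
    numerator = math.log2(n * math.log2(sigma) / (M * w))
    denominator = math.log2(M * w / (g * math.log2(n)))
    return numerator / denominator

# Larger space budgets M yield smaller query-time bounds:
for M in (2**24, 2**26, 2**28):
    print(M, query_time_bound(M))
```

As expected, increasing the space budget $M$ within the admissible range $g\log n < Mw < n\log\sigma$ shrinks the numerator and grows the denominator, so the bound decreases monotonically.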
Problem

Research questions and friction points this paper is trying to address.

Random Access
Grammar Compression
Space-Time Trade-off
Straight-Line Grammar
Compressed Data Structures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Random Access
Grammar Compression
Time-Space Trade-off
Unconditional Lower Bound
Straight-Line Program
Anouk Duyster
Max Planck Institute for Informatics, SIC, Saarbrücken, Germany; Saarbrücken Graduate School of Computer Sciences, SIC, Saarbrücken, Germany
Tomasz Kociumaka
Max Planck Institute for Informatics, Saarland Informatics Campus
algorithms, data structures, string algorithms