🤖 AI Summary
This work systematically evaluates the impact of three canonical n-gram selection strategies on regex indexing performance across five modern text analytics tasks—including real-time log processing and genomic sequence analysis—measuring indexing time, storage overhead, false-positive rate, and end-to-end query latency. It presents the first large-scale, empirical comparison conducted within a unified, open-source framework, revealing performance inflection points and applicability boundaries of traditional strategies under contemporary high-dimensional, streaming, and long-pattern workloads. Methodologically, it introduces a standardized, reproducible benchmarking infrastructure that rigorously controls data characteristics and query distributions. Key contributions include: (1) a reusable, modular evaluation framework; (2) quantitative characterization of trade-offs among strategies across diverse data and workload dimensions; and (3) open-sourcing of full implementations and curated benchmark datasets. The study delivers empirically grounded design guidelines and a reproducible baseline for building efficient, scalable regex indexes.
📝 Abstract
Efficient evaluation of regular expressions (regexes, for short) is crucial for text analysis, and n-gram indexes are fundamental to fast regex evaluation. However, these indexes face scalability challenges because the number of possible n-grams grows exponentially with n. Many existing selection strategies, developed decades ago, have never been rigorously evaluated on contemporary large-scale workloads, and comprehensive performance comparisons among them are lacking. A unified evaluation framework is therefore necessary to compare these methods under the same experimental settings. This paper presents the first systematic evaluation of three representative n-gram selection strategies across five workloads, including real-time production logs and genomic sequence analysis. We examine their trade-offs in index construction time, storage overhead, false-positive rate, and end-to-end query performance. Based on these empirical results, the study offers a modern perspective on existing n-gram-based regex evaluation methods, a set of practical observations and findings, and an adaptable testing framework to guide future research in this domain. We make our implementations of these methods and our test framework available as open source at https://github.com/mush-zhang/RegexIndexComparison.
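As background, n-gram indexing speeds up regex evaluation by prefiltering: literal n-grams that any match must contain are looked up in an inverted index, and only the surviving candidate documents are verified with the full regex engine. The sketch below is a minimal illustration of this idea, not the paper's implementation; extracting required literals from a regex is itself nontrivial and is assumed here, and the selection strategies under study would index only a subset of n-grams rather than all of them.

```python
import re
from collections import defaultdict

def ngrams(s, n=3):
    """All contiguous character n-grams of s."""
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def build_index(docs, n=3):
    """Inverted index: n-gram -> set of doc ids.
    (Indexes every n-gram; real systems select a subset.)"""
    index = defaultdict(set)
    for doc_id, doc in enumerate(docs):
        for g in ngrams(doc, n):
            index[g].add(doc_id)
    return index

def query(required_literal, regex, docs, index, n=3):
    """Prefilter on the literal's n-grams, then verify with the regex.
    required_literal is a literal every match must contain, assumed
    to have been extracted from the regex beforehand."""
    candidates = set(range(len(docs)))
    for g in ngrams(required_literal, n):
        candidates &= index.get(g, set())  # may retain false positives
    return sorted(d for d in candidates if re.search(regex, docs[d]))

docs = ["error: disk full", "warn: low disk", "error: net down"]
idx = build_index(docs)
print(query("error", r"error: \w+", docs, idx))  # ids of docs matching the regex
```

The prefilter is sound but not exact: a document containing all required n-grams may still fail the regex, which is why the false-positive rate of the selected n-gram set directly affects end-to-end query latency.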