SoftMatcha 2: A Fast and Soft Pattern Matcher for Trillion-Scale Corpora

📅 2026-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the combinatorial explosion that soft matching incurs when handling semantic variations (substitutions, insertions, and deletions) over trillion-scale corpora. The authors propose a scalable, suffix-array-based search framework that combines disk-aware indexing with dynamic pruning guided by corpus-level statistical properties, keeping search complexity under control while preserving semantic flexibility. Evaluated on the 1.4T-token FineWeb-Edu dataset, the system achieves sub-second query latency (<0.3 seconds), substantially outperforms baselines such as infini-gram, and uncovers benchmark contamination cases that prior methods miss. An accompanying platform offers online soft-matching demos in seven languages, which the authors present as the first efficient semantic soft search at trillion-token scale.
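To see where the combinatorial explosion comes from, consider relaxing each query token to its semantic neighbors: the candidate patterns form a Cartesian product whose size grows exponentially in query length. The sketch below is only a toy illustration of that blow-up; the synonym table is made up, and the paper's actual method avoids enumerating the product by pruning with corpus statistics.

```python
from itertools import product

# Hypothetical synonym table standing in for embedding-based neighbors.
SYNONYMS = {
    "quick": ["quick", "fast", "rapid"],
    "brown": ["brown"],
    "fox":   ["fox", "wolf"],
}

def soft_variants(query):
    # Naive relaxation: the Cartesian product of per-token neighbor sets
    # grows exponentially with query length (here 3 * 1 * 2 = 6 patterns).
    options = [SYNONYMS.get(tok, [tok]) for tok in query.split()]
    return [" ".join(p) for p in product(*options)]

variants = soft_variants("quick brown fox")
print(len(variants))  # 6
```

With k neighbors per token and an n-token query, a naive expansion issues up to k^n exact searches, which is exactly the growth the paper's corpus-aware pruning is designed to suppress.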

📝 Abstract
We present an ultra-fast and flexible search algorithm that enables search over trillion-scale natural language corpora in under 0.3 seconds while handling semantic variations (substitution, insertion, and deletion). Our approach employs string matching based on suffix arrays that scales well with corpus size. To mitigate the combinatorial explosion induced by the semantic relaxation of queries, our method is built on two key algorithmic ideas: fast exact lookup enabled by a disk-aware design, and dynamic corpus-aware pruning. We theoretically show that the proposed method suppresses exponential growth in the search space with respect to query length by leveraging statistical properties of natural language. In experiments on FineWeb-Edu (Lozhkov et al., 2024) (1.4T tokens), we show that our method achieves significantly lower search latency than existing methods: infini-gram (Liu et al., 2024), infini-gram mini (Xu et al., 2025), and SoftMatcha (Deguchi et al., 2025). As a practical application, we demonstrate that our method identifies benchmark contamination in training corpora, unidentified by existing approaches. We also provide an online demo of fast, soft search across corpora in seven languages.
Problem

Research questions and friction points this paper is trying to address.

soft pattern matching
trillion-scale corpora
semantic variations
fast search
natural language corpora
Innovation

Methods, ideas, or system contributions that make the work stand out.

soft pattern matching
suffix arrays
disk-aware design
corpus-aware pruning
trillion-scale search