SoftMatcha: A Soft and Fast Pattern Matcher for Billion-Scale Corpus Searches

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing corpus-search tools face a trade-off: surface-level string matching (e.g., grep and keyword-in-context concordancers) cannot handle orthographic variation or paraphrasing, while dense vector retrieval is overly coarse, often returning texts that merely share a topic. This paper introduces SoftMatcha, a pattern matcher that achieves soft (semantic) yet efficient matching by relaxing surface-level string matching with word embeddings, and that scales to billion-scale corpora via inverted indexes. Experiments show that SoftMatcha (i) searches billion-scale corpora in under a second, comparable in speed to surface-level string matching and dense vector search; (ii) extracts harmful instances that semantically match queries from English and Japanese Wikipedia; and (iii) supports corpus-linguistic analyses of Latin, a language with highly diverse inflections. An efficient implementation and an accessible web tool are publicly released.

📝 Abstract
Researchers and practitioners in natural language processing and computational linguistics frequently observe and analyze real language usage in large-scale corpora. For that purpose, they often employ off-the-shelf pattern-matching tools, such as grep, and keyword-in-context concordancers, which are widely used in corpus linguistics for gathering examples. Nonetheless, these existing techniques rely on surface-level string matching, and thus they suffer from the major limitation of not being able to handle orthographic variations and paraphrasing -- notable and common phenomena in any natural language. In addition, existing continuous approaches such as dense vector search tend to be overly coarse, often retrieving texts that are unrelated but share similar topics. Given these challenges, we propose a novel algorithm that achieves *soft* (or semantic) yet efficient pattern matching by relaxing surface-level matching with word embeddings. Our algorithm is highly scalable with respect to corpus size by utilizing inverted indexes. We have prepared an efficient implementation, and we provide an accessible web tool. Our experiments demonstrate that the proposed method (i) can execute searches on billion-scale corpora in less than a second, which is comparable in speed to surface-level string matching and dense vector search; (ii) can extract harmful instances that semantically match queries from a large set of English and Japanese Wikipedia articles; and (iii) can be effectively applied to corpus-linguistic analyses of Latin, a language with highly diverse inflections.
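The core idea described in the abstract, relaxing exact token equality into embedding similarity, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy embedding table, the similarity threshold, and the function names are assumptions introduced here for clarity.

```python
import numpy as np

# Toy 2-d word embeddings (hypothetical values, for illustration only).
EMB = {
    "quick": np.array([0.9, 0.1]),
    "fast":  np.array([0.85, 0.15]),
    "slow":  np.array([-0.8, 0.2]),
    "fox":   np.array([0.1, 0.95]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def soft_match(query_word, corpus_word, threshold=0.95):
    """A query token matches a corpus token if they are identical
    (hard match) or their embeddings are sufficiently similar --
    a relaxation of exact string equality."""
    if query_word == corpus_word:
        return True
    if query_word in EMB and corpus_word in EMB:
        return cosine(EMB[query_word], EMB[corpus_word]) >= threshold
    return False

def soft_pattern_search(pattern, tokens, threshold=0.95):
    """Return start offsets where every pattern token softly matches
    the corresponding corpus token."""
    hits = []
    for i in range(len(tokens) - len(pattern) + 1):
        if all(soft_match(p, tokens[i + j], threshold)
               for j, p in enumerate(pattern)):
            hits.append(i)
    return hits
```

For example, the query `["quick", "fox"]` matches "the fast fox ran" at offset 1, because "quick" and "fast" have near-identical toy embeddings, while "slow" does not clear the threshold.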
Problem

Research questions and friction points this paper is trying to address.

Surface-level string matching cannot handle orthographic variations and paraphrasing, which are common in any natural language.
Dense vector search is overly coarse, often retrieving texts that are topically similar but unrelated to the query.
Semantic pattern matching must scale to billion-scale corpora while keeping search latency low.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Relaxes surface-level matching with word embeddings for soft (semantic) pattern matching
Scales efficiently via inverted indexes
Executes billion-scale searches in under a second
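The inverted-index side of these contributions can be sketched in miniature: expand each query token to its embedding neighbors, take the union of their posting lists, then intersect across query tokens. The tiny corpus, the `NEIGHBORS` expansion table, and the function name below are hypothetical stand-ins, assuming neighbors are precomputed from embedding similarity.

```python
from collections import defaultdict

# Tiny corpus; in practice the index covers billions of tokens.
docs = {
    0: "the fast fox ran away",
    1: "a slow turtle walked",
    2: "the quick fox jumped",
}

# Build an inverted index: token -> set of document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for tok in text.split():
        index[tok].add(doc_id)

# Hypothetical expansion table standing in for embedding neighbors:
# each query token maps to itself plus words whose embedding
# similarity exceeds a threshold.
NEIGHBORS = {"quick": {"quick", "fast"}, "fox": {"fox"}}

def soft_lookup(query_tokens):
    """For each query token, union the posting lists of its expanded
    variants; then intersect across tokens to get candidate documents."""
    result = None
    for q in query_tokens:
        postings = set()
        for variant in NEIGHBORS.get(q, {q}):
            postings |= index.get(variant, set())
        result = postings if result is None else result & postings
    return result or set()
```

Here `soft_lookup(["quick", "fox"])` returns documents 0 and 2, since "quick" expands to cover "fast". Because lookups touch only precomputed posting lists, cost grows with posting-list length rather than corpus size, which is what makes sub-second billion-scale search plausible.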