🤖 AI Summary
Existing semantic search methods struggle to simultaneously handle orthographic variation, paraphrastic generalization, and low-latency retrieval: dense vector retrieval operates at coarse granularity and often returns topically similar but semantically irrelevant passages, while traditional string matching lacks semantic flexibility. This paper introduces a semantic pattern-matching framework that embeds a relaxed word-embedding similarity mechanism directly into an inverted index, unifying soft semantic matching with efficient hard indexing. The approach supports retrieval across English, Japanese, and Latin, and handles highly inflected languages by integrating lightweight embedding-based matching with an optimized inverted-index structure. Evaluated on billion-scale corpora, the method achieves sub-second response times, comparable to surface-level string matching and dense vector search, while improving accuracy in harmful-content detection and corpus-linguistic analysis, particularly for morphologically rich languages such as Latin. The framework and an accompanying open-source web tool are publicly released.
📝 Abstract
Researchers and practitioners in natural language processing and computational linguistics frequently observe and analyze real language usage in large-scale corpora. For that purpose, they often employ off-the-shelf pattern-matching tools, such as grep, and keyword-in-context concordancers, which are widely used in corpus linguistics for gathering examples. Nonetheless, these existing techniques rely on surface-level string matching, so they suffer from a major limitation: they cannot handle orthographic variation and paraphrasing, which are notable and common phenomena in any natural language. In addition, existing continuous approaches such as dense vector search tend to be overly coarse, often retrieving texts that share similar topics but are otherwise unrelated to the query. Given these challenges, we propose a novel algorithm that achieves *soft* (or semantic) yet efficient pattern matching by relaxing surface-level matching with word embeddings. Our algorithm is highly scalable with respect to corpus size because it utilizes inverted indexes. We provide an efficient implementation and an accessible web tool. Our experiments demonstrate that the proposed method (i) can execute searches on billion-scale corpora in less than a second, which is comparable in speed to surface-level string matching and dense vector search; (ii) can extract harmful instances that semantically match queries from a large set of English and Japanese Wikipedia articles; and (iii) can be effectively applied to corpus-linguistic analyses of Latin, a language with highly diverse inflections.
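To make the core idea concrete, here is a minimal sketch of how surface-level lookup in an inverted index can be relaxed with word-embedding similarity. This is an illustrative toy, not the paper's actual implementation: the embeddings, threshold, and function names are all assumptions, and a real system would use pretrained vectors and a far more compact index.

```python
# Toy sketch: soft pattern matching via an inverted index relaxed
# with word-embedding similarity. All names and vectors here are
# illustrative assumptions, not the paper's actual API.
from collections import defaultdict
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 2-d embeddings: "colour" is close to "color",
# "vector" is not.
EMB = {
    "color":  [1.00, 0.10],
    "colour": [0.98, 0.12],
    "vector": [0.00, 1.00],
}

def build_inverted_index(tokens):
    """Map each surface token to the positions where it occurs."""
    index = defaultdict(list)
    for pos, tok in enumerate(tokens):
        index[tok].append(pos)
    return index

def soft_lookup(index, query_token, threshold=0.9):
    """Relax exact lookup: also return positions of indexed tokens
    whose embedding is similar enough to the query token."""
    query_vec = EMB.get(query_token)
    if query_vec is None:
        # No embedding available: fall back to exact matching.
        return sorted(index.get(query_token, []))
    hits = []
    for tok, postings in index.items():
        tok_vec = EMB.get(tok)
        if tok == query_token or (
            tok_vec is not None and cosine(query_vec, tok_vec) >= threshold
        ):
            hits.extend(postings)
    return sorted(hits)

corpus = "the colour of the vector".split()
idx = build_inverted_index(corpus)
# "color" never occurs literally, but "colour" (position 1) is
# similar enough in embedding space, so the soft lookup finds it.
print(soft_lookup(idx, "color"))  # → [1]
```

The scalability claim rests on the same property this sketch exhibits: the soft relaxation only widens which *index keys* match, so retrieval still runs over precomputed posting lists rather than scanning the corpus text.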