🤖 AI Summary
Semantic join based on large language models (LLMs) incurs prohibitively high computational cost due to exhaustive pairwise row-level inference.
Method: This paper proposes a low-overhead, high-fidelity semantic join framework. Its core innovation lies in leveraging LLMs to automatically extract salient semantic features from textual attributes and compile them into conjunctive normal form (CNF) logical expressions, enabling efficient pruning of non-matching row pairs. By integrating feature decomposition with lightweight semantic filtering, the method drastically reduces LLM invocation frequency.
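The idea of a featurized decomposition can be illustrated with a toy sketch. The feature extractors and the CNF below are invented for illustration (the paper has an LLM derive them automatically); the point is that features are extracted once per record, and only cheap comparisons run over the cross product:

```python
from itertools import product

# Toy tables: each row is a free-text record.
papers = [
    {"id": 1, "text": "Deep learning for protein folding, 2021"},
    {"id": 2, "text": "A survey of graph databases, 2019"},
]
reviews = [
    {"id": "a", "text": "Review of a 2021 biology ML paper"},
    {"id": "b", "text": "Notes on relational query optimizers"},
]

# Stand-ins for LLM-extracted features (run once per record, not per pair).
def extract_year(text):
    for tok in text.replace(",", " ").split():
        if tok.isdigit() and len(tok) == 4:
            return tok
    return None

def extract_topic_terms(text):
    return {w.lower().strip(".,") for w in text.split()}

# A featurized decomposition: a CNF over cheap feature comparisons.
# Every clause (a disjunction) must hold for a pair to survive pruning.
def cnf_passes(left, right):
    clauses = [
        # Clause 1: years agree, or a side has no year (stay recall-safe).
        lambda l, r: extract_year(l) is None
                     or extract_year(r) is None
                     or extract_year(l) == extract_year(r),
        # Clause 2: some lexical/topical overlap between the records.
        lambda l, r: bool(extract_topic_terms(l) & extract_topic_terms(r)),
    ]
    return all(clause(left, right) for clause in clauses)

# Prune the cross product; only survivors would be sent to the LLM.
survivors = [
    (p["id"], r["id"])
    for p, r in product(papers, reviews)
    if cnf_passes(p["text"], r["text"])
]
print(survivors)  # → [(1, 'a')]
```

Here 3 of the 4 candidate pairs are discarded without any LLM call; only the surviving pair would incur row-level inference.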
Contribution/Results: The approach provides statistically grounded quality guarantees on the join output (e.g., bounded recall and precision). Extensive experiments on real-world datasets show that, at equivalent output quality, it reduces inference cost by up to 10× compared with state-of-the-art methods.
📝 Abstract
Large Language Models (LLMs) are being increasingly used within data systems to process large datasets with text fields. A broad class of such tasks involves a semantic join: joining two tables based on a natural-language predicate over each pair of tuples, evaluated using an LLM. Semantic joins generalize tasks such as entity matching and record categorization, as well as more complex text understanding tasks. A naive implementation is expensive as it requires invoking an LLM for every pair of rows in the cross product. Existing approaches mitigate this cost by first applying embedding-based semantic similarity to filter candidate pairs, deferring to an LLM only when similarity scores are deemed inconclusive. However, these methods yield limited gains in practice, since semantic similarity may not reliably predict the join outcome. We propose Featurized-Decomposition Join (FDJ for short), a novel approach for performing semantic joins that significantly reduces cost while preserving quality. FDJ automatically extracts features and combines them into a logical expression in conjunctive normal form, which we call a featurized decomposition, to effectively prune out non-matching pairs. A featurized decomposition extracts key information from text records and performs inexpensive comparisons on the extracted features. We show how to use LLMs to automatically extract reliable features and compose them into logical expressions while providing statistical guarantees on the output result, an inherently challenging problem due to dependencies among features. Experiments on real-world datasets show up to 10 times reduction in cost compared with the state-of-the-art while providing the same quality guarantees.
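The embedding-based baseline the abstract contrasts with FDJ can be sketched as a two-threshold router: pairs with conclusive similarity scores are decided cheaply, and only the inconclusive middle band is deferred to the LLM. The thresholds and the bag-of-words "embedding" below are stand-ins for illustration, not the methods of any particular system:

```python
def embed(text):
    # Stand-in for a real sentence embedder: bag-of-words term counts.
    vec = {}
    for w in text.lower().split():
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = sum(x * x for x in u.values()) ** 0.5
    nv = sum(x * x for x in v.values()) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def route_pair(left, right, lo=0.2, hi=0.8):
    """Return 'match', 'non-match', or 'llm' (defer to the model)."""
    s = cosine(embed(left), embed(right))
    if s >= hi:
        return "match"
    if s <= lo:
        return "non-match"
    return "llm"

print(route_pair("graph query engine", "graph query engine"))  # → match
print(route_pair("graph query engine", "protein folding"))     # → non-match
```

The abstract's critique is that for many join predicates the similarity score lands in (or mis-sorts around) the inconclusive band, so this filter prunes little; FDJ replaces the single score with per-feature comparisons that track the predicate more closely.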