🤖 AI Summary
Existing knowledge-base AI systems face a fundamental trade-off: LLM-based operations tuned empirically lack formal accuracy guarantees, while rigid row-wise processing sacrifices expressiveness and robustness. This paper introduces Semantic Operators—the first declarative, general-purpose AI data transformation model based on natural language specifications, supporting semantic filtering, joining, grouping, aggregation, and sorting. The contributions are threefold: (1) a formal semantic operator model; (2) behavior specification and accuracy guarantees via alignment with high-quality gold algorithms; and (3) accuracy-aware cost modeling and execution optimization, accelerating individual operators by up to 1,000×. Evaluated on fact-checking, biomedical multi-label classification, search, and topic analysis, the approach matches or exceeds the quality of recent LLM-based analytic systems—improving by up to 170%—while running up to 3.6× faster than the highest-quality baselines. The open-source system LOTUS implements this framework.
📝 Abstract
The semantic capabilities of large language models (LLMs) have the potential to enable rich analytics and reasoning over vast knowledge corpora. Unfortunately, existing systems either empirically optimize expensive LLM-powered operations with no performance guarantees, or serve a limited set of row-wise LLM operations, providing limited robustness, expressiveness and usability. We introduce semantic operators, the first formalism for declarative and general-purpose AI-based transformations based on natural language specifications (e.g., filtering, sorting, joining or aggregating records using natural language criteria). Each operator opens a rich space for execution plans, similar to relational operators. Our model specifies the expected behavior of each operator with a high-quality gold algorithm, and we develop an optimization framework that reduces cost, while providing accuracy guarantees with respect to a gold algorithm. Using this approach, we propose several novel optimizations to accelerate semantic filtering, joining, group-by and top-k operations by up to 1,000×. We implement semantic operators in the LOTUS system and demonstrate LOTUS' effectiveness on real, bulk-semantic processing applications, including fact-checking, biomedical multi-label classification, search, and topic analysis. We show that the semantic operator model is expressive, capturing state-of-the-art AI pipelines in a few operator calls, and making it easy to express new pipelines that match or exceed quality of recent LLM-based analytic systems by up to 170%, while offering accuracy guarantees. Overall, LOTUS programs match or exceed the accuracy of state-of-the-art AI pipelines for each task while running up to 3.6× faster than the highest-quality baselines. LOTUS is publicly available at https://github.com/lotus-data/lotus.