🤖 AI Summary
Existing systems that integrate large language models (LLMs) for semantic querying either rely on inefficient DataFrame primitives or employ SQL user-defined functions (UDFs) isolated from the query optimizer, resulting in poor performance and high user burden. Moreover, the high cost and non-determinism of LLMs render traditional optimization techniques ineffective. This work proposes Sema, a high-performance semantic query engine built on DuckDB, which is the first to deeply integrate LLM operators into the database optimizer. Sema introduces SemaSQL, a declarative language enabling natural language expressions within SQL, and supports end-to-end optimization through novel techniques including logical-level semantic compression and constraint derivation, as well as runtime adaptive execution strategies such as operator reordering, semantic fusion, and prompt batching. Experiments across 20 classification, summarization, and extraction tasks show that Sema achieves 2–10× speedups over three baseline systems while maintaining competitive result quality.
📝 Abstract
The integration of Large Language Models (LLMs) into data analytics has unlocked powerful capabilities for reasoning over structured and unstructured data at scale. However, existing systems typically rely on either DataFrame primitives, which lack the efficient execution infrastructure of modern DBMSs, or SQL User-Defined Functions (UDFs), which isolate semantic logic from the query optimizer and burden users with implementation complexity. LLM-powered semantic operators also raise new challenges: the high cost and non-deterministic nature of LLM invocation render conventional optimization rules and cost models inapplicable.
To bridge these gaps, we present Sema, a high-performance semantic query engine built on DuckDB that treats LLM-powered semantic operators as first-class citizens. Sema introduces SemaSQL, a declarative dialect that allows users to seamlessly inject natural language expressions into standard SQL clauses, enabling end-to-end optimization and execution. At the logical level, Sema's optimizer compresses natural language expressions and derives relational constraints from semantic operators. At runtime, Sema employs Adaptive Query Execution (AQE) to dynamically reorder operators, fuse semantic operations, and apply prompt batching, seeking a Pareto-optimal execution path that balances token consumption and latency under accuracy constraints. We evaluate Sema on 20 semantic queries spanning classification, summarization, and extraction tasks. Experimental results demonstrate that Sema achieves a 2–10× speedup over three baseline systems while maintaining competitive result quality.
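The prompt-batching idea mentioned above can be sketched in a few lines: instead of issuing one LLM call per row, rows are grouped into a single prompt and the answers are split back out, reducing n calls to ceil(n / batch_size). This is a minimal illustration only, not Sema's actual implementation; `call_llm` is a hypothetical stub standing in for a real LLM invocation.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stub in place of a real LLM call: it classifies each
    # numbered line in the batched prompt by a trivial keyword rule.
    answers = []
    for line in prompt.splitlines():
        if line and line[0].isdigit():
            text = line.split(". ", 1)[1]
            answers.append("positive" if "great" in text else "negative")
    return "\n".join(answers)

def batched_classify(rows, batch_size=8):
    """Classify rows with ceil(len(rows) / batch_size) LLM calls instead of len(rows)."""
    results = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        # Pack the batch into one numbered prompt so a single call covers all rows.
        prompt = "Classify each review as positive or negative:\n" + \
            "\n".join(f"{i + 1}. {r}" for i, r in enumerate(batch))
        results.extend(call_llm(prompt).splitlines())
    return results

reviews = ["great product", "broke after a day", "great value", "too noisy"]
print(batched_classify(reviews, batch_size=2))
# → ['positive', 'negative', 'positive', 'negative']
```

A real engine would additionally cap batches by the model's context window and validate that the number of returned answers matches the batch size before unpacking.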