100x Cost & Latency Reduction: Performance Analysis of AI Query Approximation using Lightweight Proxy Models

πŸ“… 2026-03-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the high cost and latency of invoking large language models (LLMs) for AI-powered queries in analytical and hybrid transactional/analytical processing (HTAP) databases, which hinder their practical deployment. To overcome these limitations, the authors propose lightweight proxy models over embedding vectors and, for the first time, systematically integrate semantic filtering (AI.IF) and semantic ranking (AI.RANK) operations into both OLAP (BigQuery) and HTAP (AlloyDB) architectures. The approach achieves over two orders of magnitude reduction in query cost and latency while maintaining, and sometimes slightly improving, accuracy. Its effectiveness and scalability are validated on large-scale benchmarks, including an extended Amazon reviews benchmark with 10M rows.
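To make the proxy-model idea concrete, below is a minimal sketch of how a semantic filter like AI.IF could be approximated: label a small sample of rows with the expensive LLM, train a cheap classifier over precomputed embedding vectors, and evaluate the remaining rows with the classifier alone. This is an illustration under stated assumptions, not the paper's implementation; `llm_filter_label`, the synthetic embeddings, and all other names are hypothetical placeholders.

```python
# Minimal sketch: approximate a semantic filter (AI.IF) with a lightweight
# proxy classifier trained over embedding vectors. All names here are
# illustrative placeholders, not the paper's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def llm_filter_label(row_text: str) -> int:
    """Hypothetical stand-in for an expensive LLM call that evaluates the
    filter predicate (e.g. "does this review complain about battery life?")."""
    return int("battery" in row_text.lower())

# Synthetic corpus; a real system would read rows from the database.
texts = [("battery drains fast" if i % 3 == 0 else "great screen")
         + f" (review {i})" for i in range(10_000)]

# Demo only: random vectors with an injected signal so the proxy has
# something to learn. Real systems use precomputed text embeddings.
labels_true = np.array([llm_filter_label(t) for t in texts])
embeddings = rng.normal(size=(len(texts), 64))
embeddings[:, 0] += 3.0 * labels_true

# 1) Label only a small sample with the LLM (the costly step).
sample_idx = rng.choice(len(texts), size=200, replace=False)
y_sample = np.array([llm_filter_label(texts[i]) for i in sample_idx])

# 2) Train a cheap proxy classifier on the sampled embeddings.
proxy = LogisticRegression(max_iter=1000).fit(embeddings[sample_idx], y_sample)

# 3) Evaluate the filter over all rows with the proxy instead of the LLM:
#    200 LLM calls instead of 10,000.
proxy_pass = proxy.predict(embeddings).astype(bool)
print(f"{proxy_pass.sum()} of {len(texts)} rows pass the proxy filter")
```

The cost asymmetry is the point: the LLM is invoked only on the small training sample, while per-row evaluation reduces to a cheap operation over already-materialized embeddings.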

πŸ“ Abstract
Several data warehouse and database providers have recently introduced extensions to SQL called AI Queries, enabling users to specify functions and conditions in SQL that are evaluated by LLMs, thereby significantly broadening the kinds of queries one can express over the combination of structured and unstructured data. LLMs offer remarkable semantic reasoning capabilities, making them an essential tool for complex and nuanced queries that blend structured and unstructured data. While extremely powerful, these AI queries can become prohibitively costly when invoked thousands of times. This paper provides an extensive evaluation of a recent AI query approximation approach that enables low-cost analytics and database applications to benefit from AI queries. The approach delivers a >100x cost and latency reduction for the semantic filter (AI.IF) operator, along with substantial gains for semantic ranking (AI.RANK). These gains come from utilizing cheap and accurate proxy models over embedding vectors. We show that despite the massive reductions in latency and cost, the proxy models preserve, and occasionally improve, accuracy across various benchmark datasets, including the extended Amazon reviews benchmark with 10M rows. We present an OLAP-friendly architecture for this approach within Google BigQuery for purely online (ad hoc) queries, and a low-latency, HTAP-friendly architecture in AlloyDB that can further improve latency by moving the proxy model training offline. We also present techniques that accelerate proxy model training.
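The semantic ranking (AI.RANK) operator admits a similar treatment: fit a lightweight regressor over embeddings to mimic LLM relevance scores on a sample, rank all rows by the proxy score, and optionally re-rank only the top candidates with the LLM. Again a hedged sketch, not the paper's architecture; `llm_relevance_score` and the synthetic data are hypothetical placeholders.

```python
# Minimal sketch: approximate semantic ranking (AI.RANK) with a proxy
# regressor over embedding vectors; names are illustrative placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
query = "battery life complaints"

def llm_relevance_score(row_text: str, query: str) -> float:
    """Hypothetical stand-in for an expensive LLM relevance judgment."""
    return float(len(set(row_text.split()) & set(query.split())))

texts = [("battery life is terrible" if i % 5 == 0 else "fast shipping, nice box")
         + f" (review {i})" for i in range(10_000)]

# Demo only: random vectors with an injected signal; real systems would
# use precomputed text embeddings.
scores_true = np.array([llm_relevance_score(t, query) for t in texts])
embeddings = rng.normal(size=(len(texts), 64))
embeddings[:, 0] += scores_true

# 1) LLM relevance scores for a small sample only.
sample_idx = rng.choice(len(texts), size=200, replace=False)
y_sample = scores_true[sample_idx]  # in practice: fresh LLM calls on the sample

# 2) Fit the cheap proxy regressor on the sampled embeddings.
proxy = Ridge(alpha=1.0).fit(embeddings[sample_idx], y_sample)

# 3) Rank every row by proxy score; re-rank only the top-k with the LLM.
proxy_scores = proxy.predict(embeddings)
top_k = np.argsort(-proxy_scores)[:10]
reranked = sorted(top_k, key=lambda i: -llm_relevance_score(texts[i], query))
print("top rows after proxy ranking + LLM re-ranking:", list(reranked))
```

Re-ranking only the top-k is one way such a cascade can spend a handful of LLM calls to tighten accuracy where it matters most, at the head of the ranking.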
Problem

Research questions and friction points this paper is trying to address.

AI Queries
Cost Reduction
Latency Reduction
Large Language Models
Database Systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proxy Models
AI Queries
Query Approximation
Embedding Vectors
Cost Reduction
πŸ‘₯ Authors
Yeounoh Chung (Google; research areas: ML, Gen AI, data management, data analytics, database)
Rushabh Desai (Google Cloud, Sunnyvale, USA)
Jian He (Google Cloud, Sunnyvale, USA)
Yu Xiao (Google Cloud, Sunnyvale, USA)
Thibaud Hottelier (Google Cloud, Sunnyvale, USA)
Yves-Laurent Kom Samo (Google Cloud, Sunnyvale, USA)
Pushkar Kadilkar (Google Cloud, Sunnyvale, USA)
Xianshun Chen (Google Cloud, Sunnyvale, USA)
Sam Idicula (Google Cloud, Sunnyvale, USA)
Fatma Γ–zcan (Google Cloud, Sunnyvale, USA)
Alon Halevy (Google; research areas: database systems, Web data management, artificial intelligence)
Yannis Papakonstantinou (Google Cloud, Sunnyvale, USA)