Approximating Opaque Top-k Queries

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Efficiently answering opaque top-k queries under black-box scoring functions (e.g., fuzzy models) is challenging due to the absence of structural assumptions and the high cost of each score call.
Method: We propose a task-agnostic hierarchical vector index coupled with a novel heavy-tail-aware submodular ε-greedy bandit algorithm. A lightweight hierarchical clustering index enables fast pruning, while histogram-based score distribution modeling, combined with a diminishing-returns bandit policy for active sampling, guarantees a constant-factor approximation.
Contribution/Results: This work introduces the first general-purpose hierarchical indexing framework for opaque top-k retrieval and the first sampling strategy jointly leveraging submodularity and heavy-tailed score distributions. Experiments on image, tabular, and synthetic datasets show up to a 10× speedup over exhaustive scanning and consistent gains over state-of-the-art sampling baselines, achieving high-quality approximate top-k results with minimal query overhead.

📝 Abstract
Combining query answering and data science workloads has become prevalent. An important class of such workloads is top-k queries with a scoring function implemented as an opaque UDF: a black box whose internal structure and scores on the search domain are unavailable. Typical examples include costly calls to fuzzy classification and regression models, and the models may also change in an ad-hoc manner. Because the algorithm does not know the scoring function's behavior on the input data, opaque top-k queries are expensive to evaluate exactly or to accelerate with indexing. We therefore propose an approximation algorithm for opaque top-k query answering. Our solution combines a task-independent hierarchical index with a novel bandit algorithm. The index clusters elements by a cheap vector representation, then builds a tree over the clusters. The bandit is a diminishing-returns submodular epsilon-greedy algorithm that maximizes the sum of the solution set's scores: it models the distribution of scores in each arm with a histogram and targets arms with fat tails. We prove that our bandit algorithm approaches a constant factor of the optimal algorithm. We evaluate our standalone library on large synthetic, image, and tabular datasets over a variety of scoring functions. Our method accelerates the time to reach nearly optimal scores by up to an order of magnitude compared to an exhaustive scan, while consistently outperforming baseline sampling algorithms.
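The sampling loop the abstract describes, treating index leaves as bandit arms and an opaque UDF as the only way to observe scores, might be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `approx_topk`, the per-arm score lists standing in for histograms, and the `tail_q` quantile heuristic for "fat tails" are all assumptions.

```python
import heapq
import random

def approx_topk(clusters, score, k, budget, eps=0.1, tail_q=0.9):
    """Epsilon-greedy bandit over cluster 'arms': explore a random arm with
    probability eps, otherwise exploit the arm whose observed scores have the
    largest upper-tail quantile. Maintains a running top-k under a call budget."""
    hists = [[] for _ in clusters]       # observed scores per arm (histogram stand-in)
    pools = [list(c) for c in clusters]  # unscored items remaining per arm
    topk, tick = [], 0                   # min-heap of (score, tiebreak, item)
    for _ in range(budget):
        live = [i for i, p in enumerate(pools) if p]
        if not live:
            break
        if random.random() < eps:
            arm = random.choice(live)    # explore
        else:
            def upper_tail(i):
                h = sorted(hists[i])
                if len(h) < 2:
                    return float("inf")  # prioritize barely-sampled arms
                return h[int(tail_q * (len(h) - 1))]
            arm = max(live, key=upper_tail)  # exploit the fattest upper tail
        item = pools[arm].pop(random.randrange(len(pools[arm])))
        s = score(item)                  # the single expensive opaque-UDF call
        hists[arm].append(s)
        tick += 1
        if len(topk) < k:
            heapq.heappush(topk, (s, tick, item))
        elif s > topk[0][0]:
            heapq.heapreplace(topk, (s, tick, item))
    return sorted(((s, it) for s, _, it in topk), reverse=True)
```

With a budget equal to the dataset size this degenerates to an exhaustive scan; the paper's speedups come from stopping well short of that while the tail-targeting policy concentrates calls on promising clusters.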
Problem

Research questions and friction points this paper is trying to address.

Approximates opaque top-k queries with unknown scoring functions
Proposes hierarchical index and bandit algorithm for efficiency
Accelerates query processing while maintaining near-optimal results
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical index for clustering elements
Submodular epsilon-greedy bandit algorithm
Histogram-based score distribution modeling
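The hierarchical index in the first bullet clusters elements by a cheap vector representation and builds a tree whose leaves can serve as bandit arms. A rough sketch under stated assumptions: the recursive median split below is a kd-tree-style stand-in for the paper's clustering, and `build_index`, `embed`, and `leaf_size` are hypothetical names, not the paper's API.

```python
import statistics

def build_index(items, embed, leaf_size=4, depth=0):
    """Recursively split items at the median of one coordinate of a cheap
    embedding, cycling through coordinates by depth. Leaves hold small
    clusters of items."""
    if len(items) <= leaf_size:
        return {"leaf": True, "items": items}
    vecs = [embed(x) for x in items]
    d = depth % len(vecs[0])
    med = statistics.median(v[d] for v in vecs)
    left = [x for x, v in zip(items, vecs) if v[d] <= med]
    right = [x for x, v in zip(items, vecs) if v[d] > med]
    if not left or not right:        # degenerate split: stop here
        return {"leaf": True, "items": items}
    return {"leaf": False,
            "children": [build_index(left, embed, leaf_size, depth + 1),
                         build_index(right, embed, leaf_size, depth + 1)]}

def leaves(node):
    """Collect the leaf clusters, e.g. to feed a bandit as its arms."""
    if node["leaf"]:
        return [node["items"]]
    return [l for c in node["children"] for l in leaves(c)]
```

The key property is that the tree is built once from cheap vectors, before any expensive scoring call, so the same index can be reused when the opaque scoring model is swapped out.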