Sublinear Data Structures for Nearest Neighbor in Ultra High Dimensions

📅 2025-03-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In ultra-high-dimensional settings (d ≫ n), conventional approximate nearest neighbor (ANN) data structures depend linearly on the dimension d in both space and query time, severely limiting scalability. Method: We propose the first (1+ε)-ANN data structure with space Õ(n log d / poly(ε)), sublinear in nd, and query time Õ(n / poly(ε)), sublinear in d, in the d ≫ n regime. Our approach integrates sparse coordinate sampling, randomized projection, and hierarchical clustering, underpinned by a refined ℓ₁/ℓ₂ distance analysis and polylogarithmic compression. Contribution: We show these space and time bounds are tight up to poly(log n/ε) factors and extend the result to general ℓₚ-norms for the first time. Via simple reductions, the structure yields further sublinear-in-d geometric primitives, including approximate orthogonal range search, farthest-neighbor retrieval, and a sublinear O(1)-approximate representation of k-means/k-median clustering, thereby breaking the long-standing dimension-dependent bottleneck in high-dimensional geometric indexing.
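To give intuition for the coordinate-sampling ingredient mentioned above, here is a minimal sketch of how reading only s ≪ d coordinates yields an unbiased estimate of a squared ℓ₂ distance. This is an illustration of the basic idea only, not the paper's construction, which combines sampling with projections and hierarchical clustering; all names below (`sampled_l2_sq`, the parameters) are hypothetical.

```python
import numpy as np

def sampled_l2_sq(p, q, s, rng):
    """Unbiased estimate of ||p - q||_2^2 from s coordinates sampled
    uniformly with replacement; scaling by d/s makes it unbiased."""
    d = p.shape[0]
    idx = rng.integers(0, d, size=s)   # read only s of the d coordinates
    diff = p[idx] - q[idx]
    return (d / s) * np.dot(diff, diff)

rng = np.random.default_rng(0)
d, s = 1_000_000, 4_000
p, q = rng.normal(size=d), rng.normal(size=d)
est = sampled_l2_sq(p, q, s, rng)
exact = float(np.dot(p - q, p - q))
# est concentrates around exact; the relative error shrinks as s grows
```

The estimator's variance decays like 1/s, which is why a polylogarithmic number of (carefully chosen) coordinates can suffice for a (1+ε) guarantee after the paper's additional machinery.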

📝 Abstract
Geometric data structures have been extensively studied in the regime where the dimension is much smaller than the number of input points. But in many scenarios in Machine Learning, the dimension can be much higher than the number of points and can be so high that the data structure might be unable to read and store all coordinates of the input and query points. Inspired by these scenarios and related studies in feature selection and explainable clustering, we initiate the study of geometric data structures in this ultra-high dimensional regime. Our focus is the *approximate nearest neighbor* problem. In this problem, we are given a set of $n$ points $C \subseteq \mathbb{R}^d$ and have to produce a *small* data structure that can *quickly* answer the following query: given $q \in \mathbb{R}^d$, return a point $c \in C$ that is approximately nearest to $q$. The main question in this paper is: *Is there a data structure with sublinear ($o(nd)$) space and sublinear ($o(d)$) query time when $d \gg n$?* In this paper, we answer this question affirmatively. We present $(1+\epsilon)$-approximation data structures with the following guarantees. For $\ell_1$- and $\ell_2$-norm distances: $\tilde{O}(n \log(d)/\mathrm{poly}(\epsilon))$ space and $\tilde{O}(n/\mathrm{poly}(\epsilon))$ query time. We show that these space and time bounds are tight up to $\mathrm{poly}(\log n/\epsilon)$ factors. For $\ell_p$-norm distances: $\tilde{O}(n^2 \log(d) (\log\log(n)/\epsilon)^p)$ space and $\tilde{O}\left(n(\log\log(n)/\epsilon)^p\right)$ query time. Via simple reductions, our data structures imply sublinear-in-$d$ data structures for some other geometric problems; e.g. approximate orthogonal range search, furthest neighbor, and give rise to a sublinear $O(1)$-approximate representation of $k$-median and $k$-means clustering.
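The abstract's headline question, whether $o(d)$ query time is possible, can be made concrete with a toy experiment: answer a nearest-neighbor query while reading only $s \ll d$ coordinates of each point. The sketch below is a hypothetical illustration of why sublinear coordinate access can suffice, not the paper's actual data structure (which additionally uses sketching and hierarchical clustering to also get sublinear space).

```python
import numpy as np

def approx_nn(points, q, s, rng):
    """Index of an approximate l2 nearest neighbor of q, reading only
    s coordinates per point (o(d) when s << d), via the unbiased
    sampled-distance estimate scaled by d/s."""
    n, d = points.shape
    idx = rng.integers(0, d, size=s)           # one shared coordinate sample
    diffs = points[:, idx] - q[idx]            # shape (n, s)
    est = (d / s) * np.einsum('ij,ij->i', diffs, diffs)
    return int(np.argmin(est))

rng = np.random.default_rng(1)
n, d, s = 50, 200_000, 5_000
points = rng.normal(size=(n, d))
q = points[7] + 0.01 * rng.normal(size=d)      # query planted near point 7
nn = approx_nn(points, q, s, rng)              # recovers 7 w.h.p.: the planted
                                               # gap dwarfs the sampling noise
```

Uniform sampling alone gives only a probabilistic guarantee per query; the paper's contribution is turning this kind of sublinear access into a worst-case $(1+\epsilon)$ structure with matching lower bounds.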
Problem

Research questions and friction points this paper is trying to address.

Develop sublinear space data structures for ultra-high dimensions.
Achieve sublinear query time for approximate nearest neighbor search.
Provide tight space and time bounds for various norm distances.
Innovation

Methods, ideas, or system contributions that make the work stand out.

First $(1+\epsilon)$-ANN data structure with $o(nd)$ space and $o(d)$ query time in the $d \gg n$ regime
Combination of sparse coordinate sampling, randomized projection, and hierarchical clustering
Tight bounds for $\ell_1$/$\ell_2$, extension to general $\ell_p$-norms, and reductions to range search, furthest neighbor, and clustering representations