Learning Filter-Aware Distance Metrics for Nearest Neighbor Search with Multiple Filters

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph-based methods for approximate nearest neighbor (ANN) search under multi-label filtering employ fixed, data-agnostic penalty schemes, resulting in poor generalization. Method: We propose a filter-aware distance metric learning framework that jointly models label filtering constraints and vector distance metrics for the first time. Through constrained linear optimization, it automatically learns label-vector joint weights from data, enabling graph index structures to better align with semantic distributions. The learned filter-aware weights are dynamically integrated during both graph-based index construction and query processing. Contribution/Results: Experiments on multiple benchmark datasets demonstrate a 5–10% improvement in retrieval accuracy over state-of-the-art baselines using fixed penalties. Our approach exhibits superior cross-distribution generalization and adaptability, particularly under diverse label-filtering conditions.

📝 Abstract
Filtered Approximate Nearest Neighbor (ANN) search retrieves the closest vectors for a query vector from a dataset. It enforces that a specified set of discrete labels $S$ for the query must be included in the labels of each retrieved vector. Existing graph-based methods typically incorporate filter awareness by assigning fixed penalties or prioritizing nodes based on filter satisfaction. However, since these methods use fixed, data-independent penalties, they often fail to generalize across datasets with diverse label and vector distributions. In this work, we propose a principled alternative that learns the optimal trade-off between vector distance and filter match directly from the data, rather than relying on fixed penalties. We formulate this as a constrained linear optimization problem, deriving weights that better reflect the underlying filter distribution and more effectively address the filtered ANN search problem. These learned weights guide both the search process and index construction, leading to graph structures that more effectively capture the underlying filter distribution and filter semantics. Our experiments demonstrate that adapting the distance function to the data significantly improves accuracy by 5–10% over fixed-penalty methods, providing a more flexible and generalizable framework for the filtered ANN search problem.
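The abstract describes a combined score that trades off vector distance against filter (label) satisfaction. A minimal sketch of such a filter-aware distance is below; it assumes Euclidean vector distance, a scalar learned weight `w`, and a per-missing-label penalty, all of which are illustrative choices rather than the paper's exact formulation.

```python
import numpy as np

def filter_aware_distance(q_vec, x_vec, q_labels, x_labels, w):
    """Combined distance: vector distance plus a weighted penalty for
    each query label missing from the candidate's label set.

    `w` is the learned trade-off weight (assumed scalar here); the paper
    learns such weights from data rather than fixing them by hand."""
    vec_dist = np.linalg.norm(q_vec - x_vec)
    # Number of required query labels the candidate does not carry.
    missing = len(set(q_labels) - set(x_labels))
    return vec_dist + w * missing
```

A candidate that satisfies every query label is ranked purely by vector distance; each unmet label adds `w` to its score, so the graph search can still traverse partially matching nodes instead of discarding them outright.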
Problem

Research questions and friction points this paper is trying to address.

Learning data-dependent trade-offs between vector distance and filter satisfaction
Addressing generalization issues in filtered nearest neighbor search
Optimizing graph structures to capture underlying filter distributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns optimal trade-off between vector distance and filter match
Formulates constrained linear optimization for weight derivation
Learned weights guide both search process and index construction
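The bullets above can be illustrated with a toy version of the weight-learning step: given training triples (query, positive, negative) with precomputed vector distances and missing-label counts, solve a small linear program for a non-negative penalty weight. This is an illustrative stand-in for the paper's constrained linear optimization; the margin, slack variables, and regularizer `reg` are assumptions, and the actual method learns richer label-vector joint weights.

```python
import numpy as np
from scipy.optimize import linprog

def learn_filter_weight(d_pos, d_neg, m_pos, m_neg, margin=0.1, reg=1e-3):
    """Learn a scalar filter-penalty weight w >= 0 from training triples.

    For each triple i we ask the positive's combined score to beat the
    negative's by `margin`, with slack s_i absorbing violations:
        d_pos[i] + w*m_pos[i] + margin <= d_neg[i] + w*m_neg[i] + s_i
    and minimize sum(s) + reg*w as a linear program over [w, s_1..s_n].
    """
    n = len(d_pos)
    c = np.concatenate([[reg], np.ones(n)])          # objective on [w, s]
    A = np.zeros((n, n + 1))
    A[:, 0] = np.asarray(m_pos) - np.asarray(m_neg)  # coefficient on w
    A[np.arange(n), np.arange(n) + 1] = -1.0         # -s_i per constraint
    b = np.asarray(d_neg) - np.asarray(d_pos) - margin
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.x[0]
```

With one triple where the positive is slightly farther in vector space (`d_pos=1.0` vs `d_neg=0.9`) but satisfies one more label, the LP returns a positive weight (0.2 with `margin=0.1`) so the filter match outweighs the small distance gap.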
Ananya Sutradhar
Microsoft Research India
Suryansh Gupta
Microsoft Research India
Ravishankar Krishnaswamy
Microsoft Research
Haiyang Xu
Microsoft Corporation
Aseem Rastogi
Microsoft Research India
Gopal Srinivasa
Microsoft Research India