AI Summary
This paper studies the *k*-center clustering problem with *set outliers*: given a dataset, a family of candidate outlier subsets, and a budget *z*, the goal is to remove at most *z* subsets (rather than individual points) and select *k* centers to minimize the maximum distance from any remaining point to its nearest center. This model better captures structured noise in databases, e.g., malfunctioning sensors or corrupted data sources. We introduce the first formal framework for set outliers and establish its theoretical connection to relational queries. We design a tri-criteria approximation algorithm, proving a tight approximation ratio and a computational hardness lower bound in general metric spaces. For geometric settings, we achieve near-linear-time computation by integrating range trees and BBD trees, and introduce a frequency parameter *f* to construct a small coreset. Our algorithm yields an *O*(1)-approximate solution using at most 2*k* centers and at most 2*fz* outlier subsets; when *f* = 1, the running time improves significantly.
Abstract
We introduce and study the $k$-center clustering problem with set outliers, a natural and practical generalization of the classical $k$-center clustering with outliers. Instead of removing individual data points, our model allows discarding up to $z$ subsets from a given family of candidate outlier sets $\mathcal{H}$. Given a metric space $(P,\mathsf{dist})$, where $P$ is a set of elements and $\mathsf{dist}$ a distance metric, a family of sets $\mathcal{H}\subseteq 2^P$, and parameters $k, z$, the goal is to compute a set of $k$ centers $S\subseteq P$ and a family of at most $z$ sets $H\subseteq \mathcal{H}$ to minimize $\max_{p\in P\setminus(\bigcup_{h\in H} h)} \min_{s\in S}\mathsf{dist}(p,s)$. This abstraction captures structured noise common in database applications, such as faulty data sources or corrupted records in data integration and sensor systems.
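To make the objective concrete, the following is a minimal brute-force sketch (not the paper's algorithm, which is an approximation scheme): it enumerates all choices of $k$ centers and of at most $z$ candidate outlier sets, and returns the pair minimizing the clustering cost defined above. All function names are illustrative.

```python
from itertools import combinations

def clustering_cost(P, dist, S, removed):
    """max over remaining points p of min over centers s of dist(p, s).

    `removed` is the union of the discarded outlier sets.
    """
    remaining = [p for p in P if p not in removed]
    if not remaining:
        return 0.0
    return max(min(dist(p, s) for s in S) for p in remaining)

def k_center_set_outliers_brute_force(P, dist, H, k, z):
    """Exhaustive search over all (S, H') pairs; exponential time,
    for intuition on the objective only."""
    best = None
    for r in range(z + 1):                 # discard at most z sets
        for Hs in combinations(H, r):
            removed = set().union(*Hs)     # points covered by the discarded sets
            for S in combinations(P, k):
                cost = clustering_cost(P, dist, S, removed)
                if best is None or cost < best[0]:
                    best = (cost, S, Hs)
    return best

# Example: points on a line; one candidate outlier set containing the far point.
P = [0, 1, 2, 10]
dist = lambda a, b: abs(a - b)
H = [frozenset({10})]
cost, S, Hs = k_center_set_outliers_brute_force(P, dist, H, k=1, z=1)
# Discarding {10} and centering at 1 gives cost 1.
```

The classical $k$-center-with-outliers problem is the special case where $\mathcal{H}$ consists of all singleton sets $\{p\}$, so $f = 1$.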
We present the first approximation algorithms for this problem in both general and geometric settings. Our methods provide tri-criteria approximations: selecting up to $2k$ centers and up to $2fz$ outlier sets (where $f$ is the maximum number of sets that a point belongs to), while achieving an $O(1)$-approximation in clustering cost. In geometric settings, we leverage range trees and BBD trees to achieve near-linear time algorithms. In many real applications $f=1$; in this case we further improve the running time of our algorithms by constructing small \emph{coresets}. We also provide a hardness result for the general problem, showing that it is unlikely that any sublinear approximation of the clustering cost can be achieved while selecting fewer than $f\cdot z$ outlier sets.
We demonstrate that this model naturally captures relational clustering with outliers, where outliers are input tuples whose removal affects the join output. We provide approximation algorithms for both problems, establishing a tight connection between robust clustering and relational query evaluation.