🤖 AI Summary
This paper studies the $k$-Center clustering problem on binary vectors with missing entries: given incomplete binary data, partition the entities into $k$ clusters and select a binary center for each cluster so as to minimize the maximum Hamming distance from an entity to its cluster center. The problem is NP-hard; however, we establish for the first time that structural properties of the pattern of known entries—such as the vertex cover number, fracture number, and treewidth of an associated row-column graph—enable fixed-parameter tractability. Leveraging this graph representation and a parameterized complexity analysis, we design FPT algorithms for each of these structural sparsity parameters. Furthermore, we uncover an intrinsic connection between the single-cluster case, known as Closest String, and the computational complexity of integer linear programming (ILP), proving that any improvement over our Closest String algorithm would yield faster algorithms for integer linear programs with few constraints.
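The objective described above can be sketched in a few lines. This is an illustrative reading of the summary, not the paper's formalism: we assume a missing entry (written `'?'`) contributes nothing to the Hamming distance, so distance is counted only over an entity's known positions, and all function names are our own.

```python
def hamming(entity, center):
    """Hamming distance between an incomplete binary entity and a complete
    binary center, counted only over the entity's known ('0'/'1') entries.
    Missing entries '?' are treated as don't-cares (our assumption)."""
    return sum(1 for e, c in zip(entity, center) if e != '?' and e != c)

def kcenter_cost(entities, centers, assignment):
    """k-Center objective: the maximum distance from any entity to the
    center of the cluster it is assigned to."""
    return max(hamming(e, centers[assignment[i]])
               for i, e in enumerate(entities))

entities = ["01?1", "0011", "1?00"]   # rows of the incomplete binary matrix
centers = ["0011", "1100"]            # one complete binary center per cluster
assignment = [0, 0, 1]                # entity i belongs to cluster assignment[i]
print(kcenter_cost(entities, centers, assignment))  # → 1
```

Minimizing this quantity over all choices of centers and assignments is the hard part; the paper's contribution is that sparsity structure in the known entries makes that optimization fixed-parameter tractable.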
📝 Abstract
$k$-Center clustering is a fundamental classification problem, where the task is to categorize a given collection of entities into $k$ clusters and come up with a representative for each cluster, so that the maximum distance between an entity and its representative is minimized. In this work, we focus on the setting where the entities are represented by binary vectors with missing entries, which model incomplete categorical data. This version of the problem has wide applications, from predictive analytics to bioinformatics. Our main finding is that the problem, which is notoriously hard from the classical complexity viewpoint, becomes tractable as soon as the known entries are sparse and exhibit a certain structure. Formally, we give fixed-parameter tractable algorithms for the parameters vertex cover number, fracture number, and treewidth of the row-column graph, which encodes the positions of the known entries of the matrix. Additionally, we tie the complexity of the 1-cluster variant of the problem, famous under the name Closest String, to the complexity of solving integer linear programs with few constraints. This implies, in particular, that improving upon the running times of our algorithms would lead to more efficient algorithms for integer linear programming in general.
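The row-column graph mentioned in the abstract admits a simple construction sketch. This is our reading of the one-line description there ("encodes the positions of the known entries"): a bipartite graph with one vertex per row and one per column, and an edge whenever the corresponding entry is known; the vertex naming is purely illustrative.

```python
def row_column_graph(matrix):
    """Bipartite row-column graph of an incomplete binary matrix
    (rows of '0'/'1'/'?' characters): vertices r0, r1, ... for rows and
    c0, c1, ... for columns, with an edge (ri, cj) whenever entry (i, j)
    is known, i.e. not '?'. Returned as a set of edges."""
    edges = set()
    for i, row in enumerate(matrix):
        for j, entry in enumerate(row):
            if entry != '?':
                edges.add((f"r{i}", f"c{j}"))
    return edges

matrix = ["01?",
          "??1",
          "1??"]
print(sorted(row_column_graph(matrix)))
# [('r0', 'c0'), ('r0', 'c1'), ('r1', 'c2'), ('r2', 'c0')]
```

The intuition behind the parameterization is visible here: the sparser the known entries, the fewer edges this graph has, so structural measures such as its vertex cover number or treewidth can stay small even for large matrices.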