🤖 AI Summary
This paper addresses the inefficiency of approximating the Chamfer distance between high-dimensional point clouds. We propose a $(1+\varepsilon)$-approximation algorithm with time complexity $O\big(nd(\log\log n + \log(1/\varepsilon))/\varepsilon^2\big)$. Our method integrates efficient approximate nearest neighbor search, hierarchical random sampling, and geometric pruning, underpinned by a rigorous error-control analysis framework. When $\varepsilon$ is constant, the logarithmic factor improves from $O(\log n)$ to $O(\log\log n)$, significantly narrowing the gap between the current upper bound and the trivial lower bound $\Omega(dn)$. This represents the first substantial reduction in that gap. Compared to the state-of-the-art result from NeurIPS 2023, our algorithm achieves a substantially faster runtime, offering both stronger theoretical guarantees and improved practical performance for large-scale point cloud similarity measurement.
📝 Abstract
For two d-dimensional point sets A, B of size up to n, the Chamfer distance from A to B is defined as CH(A,B) = sum_{a in A} min_{b in B} ||a-b||. The Chamfer distance is a widely used measure for quantifying dissimilarity between sets of points, with many applications in machine learning and computer vision. A recent work of Bakshi et al., NeurIPS'23, gave the first near-linear time (1+eps)-approximate algorithm, with a running time of O(nd log(n)/eps^2). In this paper we improve the running time further, to O(nd(loglog(n)+log(1/eps))/eps^2). When eps is a constant, this significantly reduces the gap between the upper bound and the trivial Omega(dn) lower bound, from O(log n) to O(loglog n).
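For concreteness, the quantity being approximated can be computed exactly by brute force: for each point a in A, find its nearest neighbor in B and sum the distances. This is a minimal sketch of the definition only (not the paper's algorithm); its O(|A| · |B| · d) cost is exactly what the near-linear approximation schemes above avoid.

```python
import math

def chamfer_distance(A, B):
    """Exact one-directional Chamfer distance CH(A, B):
    sum over a in A of the Euclidean distance from a to its
    nearest neighbor in B. Brute force, O(|A| * |B| * d) time.
    """
    return sum(min(math.dist(a, b) for b in B) for a in A)

# Tiny example in d = 2:
A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (0.0, 2.0)]
# (0,0)'s nearest neighbor in B is (0,0), at distance 0;
# (1,0)'s nearest neighbor in B is (0,0), at distance 1.
print(chamfer_distance(A, B))  # → 1.0
```

Note that CH is not symmetric: CH(A,B) and CH(B,A) generally differ, which is why the abstract defines the distance "from A to B".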