🤖 AI Summary
Measuring distances between high-dimensional distributions with unequal total masses—e.g., in unbalanced optimal transport or sample-size-sensitive settings—remains challenging due to the lack of scale-invariant, comparable metrics.
Method: This paper proposes a neural dual optimization approach to estimate the flat metric (i.e., bounded Lipschitz distance), integrating deep neural networks, the Kantorovich–Rubinstein dual formulation, and spectral normalization for Lipschitz regularization.
Contribution/Results: We introduce, for the first time, a learnable Lipschitz-constrained test function network trained within a unified framework, ensuring cross-distribution comparability—overcoming the scale inconsistency inherent in conventional independent training. Extensive experiments on multiple synthetic and real-world benchmarks with ground-truth distances demonstrate superior estimation accuracy and cross-model stability compared to normalized Wasserstein-1 and other baselines.
📝 Abstract
We provide an implementation to compute the flat metric in any dimension. The flat metric, also called dual bounded Lipschitz distance, generalizes the well-known Wasserstein distance W1 to the case that the distributions are of unequal total mass. This is of particular interest for unbalanced optimal transport tasks and for the analysis of data distributions where the sample size is important or normalization is not possible. The core of the method is based on a neural network to determine on optimal test function realizing the distance between two given measures. Special focus was put on achieving comparability of pairwise computed distances from independently trained networks. We tested the quality of the output in several experiments where ground truth was available as well as with simulated data.