🤖 AI Summary
Many machine learning problems involve data that are themselves probability distributions, for example labeled datasets viewed as mixtures of class-conditional feature distributions, which calls for optimization directly over spaces of distributions. Method: This paper introduces a Wasserstein-over-Wasserstein (WoW) gradient flow framework: it endows the space of probability distributions over probability distributions with the WoW distance, derives a differential structure on this infinite-dimensional space, and defines the corresponding gradient flows, i.e., dynamics that decrease a given objective functional. Tractable objectives are obtained as Maximum Mean Discrepancy (MMD) functionals built from Sliced-Wasserstein-based kernels between distributions, enabling flows at the level of entire labeled datasets. The approach combines optimal transport theory, Wasserstein gradient flows, and a two-level (distribution-over-distribution) representation of datasets. Contribution/Results: Evaluated on transfer learning and dataset distillation, WoW flows improve model generalization and dataset compression ratios, with experiments showing stable convergence that drives empirical datasets toward target domains or compact distilled representations.
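For concreteness, here is a sketch of the two-level construction the summary refers to, written in standard hierarchical optimal transport notation (an illustration, not necessarily the paper's exact formulation): datasets are measures over the space of class-conditional feature distributions, and the WoW distance is an optimal transport distance whose ground cost is itself the Wasserstein distance.

```latex
% Hedged sketch (standard hierarchical OT notation, not necessarily the paper's
% exact definition): the WoW distance is an optimal transport distance between
% measures on the Wasserstein space, with W_2 as the ground cost.
\[
  \mathrm{WoW}_2^2(\mathbb{P},\mathbb{Q})
  \;=\;
  \inf_{\Pi \in \Pi(\mathbb{P},\mathbb{Q})}
  \int_{\mathcal{P}_2(\mathbb{R}^d)\times\mathcal{P}_2(\mathbb{R}^d)}
  W_2^2(\mu,\nu)\,\mathrm{d}\Pi(\mu,\nu),
\]
% where P and Q are labeled datasets viewed as probability measures over their
% class-conditional feature distributions mu, nu, and Pi(P, Q) is the set of
% couplings between these two measures.
```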
📝 Abstract
Many applications in machine learning involve data represented as probability distributions. The emergence of such data requires radically novel techniques to design tractable gradient flows on probability distributions over this type of (infinite-dimensional) object. For instance, being able to flow labeled datasets is a core task for applications ranging from domain adaptation to transfer learning or dataset distillation. In this setting, we propose to represent each class by the associated conditional distribution of features, and to model the dataset as a mixture distribution supported on these classes (which are themselves probability distributions), meaning that labeled datasets can be seen as probability distributions over probability distributions. We endow this space with a metric structure from optimal transport, namely the Wasserstein over Wasserstein (WoW) distance, derive a differential structure on this space, and define WoW gradient flows. The latter make it possible to design dynamics over this space that decrease a given objective functional. We apply our framework to transfer learning and dataset distillation tasks, leveraging our gradient flow construction as well as novel tractable functionals that take the form of Maximum Mean Discrepancies with Sliced-Wasserstein-based kernels between probability distributions.
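To illustrate the kind of objective the abstract describes, below is a minimal numerical sketch (not the authors' code; the kernel form, bandwidth, projection count, and quantile grid are assumptions made for the example): each labeled dataset is treated as a uniform mixture of class-conditional point clouds, the kernel between two point clouds is a Gaussian kernel on an estimated squared Sliced-Wasserstein distance, and the squared MMD between the two datasets is computed from pairwise kernel evaluations.

```python
# Minimal sketch of an MMD functional with a Sliced-Wasserstein-based kernel
# between two labeled datasets, each represented as a list of class-conditional
# point clouds. Illustrative only; not the paper's implementation.
import numpy as np


def sliced_wasserstein2(x, y, n_proj=128, n_quantiles=200, rng=None):
    """Monte-Carlo estimate of the squared 2-sliced-Wasserstein distance between
    two empirical measures given as (n, d) and (m, d) sample arrays."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[1]
    thetas = rng.normal(size=(n_proj, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)  # directions on the sphere
    qs = (np.arange(n_quantiles) + 0.5) / n_quantiles        # quantile grid in (0, 1)
    sw2 = 0.0
    for theta in thetas:
        # 1-d squared W2 between the projected measures via their quantile functions
        qx = np.quantile(x @ theta, qs)
        qy = np.quantile(y @ theta, qs)
        sw2 += np.mean((qx - qy) ** 2)
    return sw2 / n_proj


def sw_kernel(x, y, bandwidth=1.0, **kw):
    """Gaussian-type kernel between distributions, driven by sliced Wasserstein."""
    return np.exp(-sliced_wasserstein2(x, y, **kw) / (2.0 * bandwidth**2))


def mmd2_between_datasets(classes_p, classes_q, **kw):
    """Squared MMD between two datasets, each given as a list of (n_i, d) arrays of
    class-conditional features (i.e. a uniform mixture of feature distributions)."""
    k = lambda a, b: sw_kernel(a, b, **kw)
    kpp = np.mean([[k(a, b) for b in classes_p] for a in classes_p])
    kqq = np.mean([[k(a, b) for b in classes_q] for a in classes_q])
    kpq = np.mean([[k(a, b) for b in classes_q] for a in classes_p])
    return kpp + kqq - 2.0 * kpq


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy labeled datasets in R^2 with 3 classes each.
    ds_p = [rng.normal(loc=i, size=(50, 2)) for i in range(3)]
    ds_q = [rng.normal(loc=i + 0.5, size=(60, 2)) for i in range(3)]
    print("MMD^2 between datasets:", mmd2_between_datasets(ds_p, ds_q, rng=rng))
```

In a gradient flow setting, one would differentiate an objective of this form with respect to the particle locations of the source dataset and update them iteratively toward the target; the snippet above only evaluates the functional.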