🤖 AI Summary
This work addresses the convergence rates of gradient flows for entropy functionals, such as the KL and χ² divergences, under the Hellinger–Kantorovich (HK) geometry. Conventional log-Sobolev techniques fail in the HK setting because of its coupled transport and reaction dynamics. To overcome this, we propose a novel "shape–mass decomposition" framework that decouples Wasserstein-type transport from Hellinger-type birth-death dynamics. Building on this decomposition, we establish, for the first time, a complete theory of global exponential decay for entropy functionals along HK gradient flows, leveraging a generalized Polyak–Łojasiewicz inequality and extended dissipation estimates. Our results quantitatively unify the exponential decay rates of the KL and χ² divergences under Wasserstein, Hellinger, and HK flows, providing a geometrically principled and analytically rigorous foundation for probabilistic modeling, statistical inference, and optimization in machine learning.
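To make the decay mechanism concrete (a standard sketch under an assumed Polyak–Łojasiewicz constant $\lambda > 0$, not the paper's precise statement): if the dissipation of an energy $\mathcal{F}$ along its gradient flow $(\rho_t)_{t \ge 0}$ is controlled by the energy itself,

$$
\frac{d}{dt}\,\mathcal{F}(\rho_t) \;=\; -\,|\partial \mathcal{F}|^2(\rho_t) \;\le\; -\,2\lambda\,\mathcal{F}(\rho_t),
$$

then Grönwall's inequality immediately yields global exponential decay, $\mathcal{F}(\rho_t) \le e^{-2\lambda t}\,\mathcal{F}(\rho_0)$. The difficulty addressed here is identifying a geometry-dependent rate $\lambda$ for the KL and χ² energies when the metric slope $|\partial \mathcal{F}|$ is taken in the Wasserstein, Hellinger, or HK geometry.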
📝 Abstract
We investigate a family of gradient flows of positive and probability measures, focusing on the Hellinger-Kantorovich (HK) geometry, which unifies the transport mechanism of Otto-Wasserstein and the birth-death mechanism of Hellinger (or Fisher-Rao). A central contribution is a complete characterization of the global exponential decay behavior of entropy functionals (e.g., KL, $\chi^2$) under Otto-Wasserstein and Hellinger-type gradient flows. In particular, for the more challenging analysis of HK gradient flows on positive measures -- where the typical log-Sobolev arguments fail -- we develop a specialized shape-mass decomposition that enables new analytical results. Our approach also leverages (Polyak-)Łojasiewicz-type functional inequalities and a careful extension of classical dissipation estimates. These findings provide a unified and complete theoretical framework for gradient flows and underpin applications in computational algorithms for statistical inference, optimization, and machine learning.
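For orientation, the HK gradient flow of an energy $\mathcal{F}$ with first variation $\delta\mathcal{F}/\delta\rho$ formally combines the two mechanisms in a single evolution (a schematic form; the multiplicative constants in front of each term depend on the normalization of the HK metric):

$$
\partial_t \rho_t \;=\; \underbrace{\nabla \cdot \Big(\rho_t\, \nabla \tfrac{\delta \mathcal{F}}{\delta \rho}(\rho_t)\Big)}_{\text{Otto-Wasserstein transport}} \;-\; \underbrace{\rho_t\, \tfrac{\delta \mathcal{F}}{\delta \rho}(\rho_t)}_{\text{Hellinger birth-death}}.
$$

For $\mathcal{F}(\rho) = \mathrm{KL}(\rho \,\|\, \pi)$, the first variation is $\log(\rho/\pi)$ up to an additive constant, so the transport term reproduces the Fokker-Planck drift toward $\pi$ while the reaction term creates or removes mass wherever $\rho$ over- or undershoots $\pi$.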