🤖 AI Summary
Augustin–Csiszár and Lapidoth–Pfister α-mutual informations lack closed-form expressions, rendering their efficient and reliable computation challenging. Method: We propose the first alternating optimization algorithm with provable global convergence guarantees. By deriving a novel variational representation of Augustin–Csiszár mutual information—inspired by the Sibson form—we recast the problem into a tractable convex optimization framework. The algorithm integrates tools from convex analysis and variational inference, ensuring theoretical rigor while enhancing computational efficiency and numerical stability. Contribution/Results: Experiments across diverse channel models demonstrate consistent superiority over existing heuristic approaches. This work provides a robust, scalable, and theoretically grounded numerical tool for α-capacity evaluation, robust communication design, and generalized information-theoretic modeling.
📝 Abstract
The Augustin--Csisz{' a}r mutual information (MI) and Lapidoth--Pfister MI are well-known generalizations of the Shannon MI, but do not have known closed-form expressions, so they need to be calculated by solving optimization problems. In this study, we propose alternating optimization algorithms for computing these types of MI and present proofs of their global convergence properties. We also provide a novel variational characterization of the Augustin--Csisz{' a}r MI that is similar to that of the Sibson MI.