🤖 AI Summary
This work proposes a unified interpretation of α-mutual information in the context of quantitative information flow and privacy leakage. By constructing an adversarial generalized decision model, and by using Kolmogorov–Nagumo averages together with q-logarithms to characterize the adversary's gain, the study establishes for the first time that α-mutual information arises as a specific instance of generalized g-leakage. This link further shows that the parameter α corresponds precisely to the adversary's degree of risk aversion. The result yields a cohesive framework for interpreting privacy leakage through information-theoretic measures, deepening the understanding of how such measures relate to adversarial behavior.
📝 Abstract
This paper presents a unified interpretation of $\alpha$-mutual information ($\alpha$-MI) in terms of generalized $g$-leakage. Specifically, we give a novel interpretation of $\alpha$-MI within an extended framework for quantitative information flow based on adversarial generalized decision problems. This framework employs the Kolmogorov–Nagumo mean and the $q$-logarithm to characterize the adversary's gain. Furthermore, we demonstrate that, within this framework, the parameter $\alpha$ can be interpreted as a measure of the adversary's risk aversion.
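For readers unfamiliar with the building blocks named above, the following are the standard textbook definitions of the Kolmogorov–Nagumo mean, the $q$-logarithm, and one common form of $\alpha$-MI (Sibson's). These formulas are standard in the literature and are included here as background; they are not taken from the paper, whose contribution is how these objects combine into a generalized $g$-leakage framework.

```latex
% Kolmogorov–Nagumo (quasi-arithmetic) mean of values x_1,...,x_n with
% weights w_i, generated by a continuous, strictly monotone function phi:
\[
  M_\varphi(x_1,\dots,x_n; w)
  \;=\;
  \varphi^{-1}\!\Bigl(\sum_{i=1}^{n} w_i\,\varphi(x_i)\Bigr).
\]

% q-logarithm (Tsallis), which recovers the natural logarithm as q -> 1:
\[
  \ln_q(x) \;=\; \frac{x^{1-q} - 1}{1-q} \quad (q \neq 1),
  \qquad
  \lim_{q \to 1} \ln_q(x) \;=\; \ln x.
\]

% Sibson's alpha-mutual information, expressed via the Renyi divergence
% D_alpha as a minimization over output distributions Q_Y:
\[
  I_\alpha(X;Y)
  \;=\;
  \min_{Q_Y} D_\alpha\bigl(P_{XY} \,\|\, P_X \times Q_Y\bigr).
\]
```

As $\alpha \to 1$ (equivalently $q \to 1$), these generalized quantities reduce to their classical counterparts, which is why $\alpha$ can act as a single dial for the adversary's attitude toward risk.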