AI Summary
This paper investigates the quantitative relationship between the estimation loss due to decoder mismatch and the $f$-divergence for vector channels corrupted by anisotropic Gaussian noise, overcoming the limitations of prior work, which was restricted to isotropic Gaussian assumptions and the relative entropy framework. Methodologically, we first extend score matching to anisotropic Gaussian channels, establishing a unified theory linking statistical estimation error to distributional discrepancy; we further generalize the De Bruijn identity to the mismatched setting and introduce the relative Fisher information to characterize the differential structure of $f$-divergence. Our main contribution is the derivation of an exact analytical expression connecting $f$-divergence to mismatched estimation error, providing a novel theoretical tool for the generalization analysis and robust design of generative models.
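The scalar, relative-entropy baseline that this result generalizes can be stated explicitly. The following is an illustrative restatement (the notation here is ours, not necessarily the paper's) of the classical mismatched-estimation identity for the isotropic Gaussian channel $Y_\gamma = \sqrt{\gamma}\,X + N$ with $N \sim \mathcal{N}(0,1)$:

```latex
D(P\|Q) \;=\; \frac{1}{2}\int_0^\infty
\Big[\,\mathrm{mse}_Q(\gamma)\;-\;\mathrm{mmse}_P(\gamma)\,\Big]\,d\gamma ,
```

where $\mathrm{mmse}_P(\gamma)$ is the minimum mean-square error when $X \sim P$ and the decoder knows $P$, and $\mathrm{mse}_Q(\gamma)$ is the error incurred by a decoder mismatched to $Q$. The contribution summarized above replaces the relative entropy $D(P\|Q)$ with a general $f$-divergence and the scalar channel with a vector channel under anisotropic Gaussian noise.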
Abstract
Relative Fisher information, known in machine learning as the score-matching objective, is a recently introduced tool for parameter estimation. Fundamental relations between relative entropy and score matching have been established in the literature for scalar and isotropic Gaussian channels. This paper demonstrates that such relations hold for a much larger class of observation models. We consider vector channels in which the perturbation is non-isotropic Gaussian noise. For such channels, we derive new representations that connect the $f$-divergence between two distributions to the estimation loss induced by mismatch at the decoder. This approach not only unifies but also greatly extends existing results from both the isotropic Gaussian and classical relative entropy frameworks. Building on this generalization, we extend De Bruijn's identity to mismatched non-isotropic Gaussian models and show that connections to generative models follow naturally as an application of this new result.
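As a concrete anchor for the De Bruijn identity mentioned above, the classical (matched, scalar) version states that $\frac{d}{dt}\,h(X+\sqrt{t}\,Z) = \frac{1}{2}J(X+\sqrt{t}\,Z)$, where $h$ is differential entropy and $J$ is Fisher information. A minimal numerical sanity check of this scalar identity, using a Gaussian input for which both sides have closed forms (this example is ours, not from the paper):

```python
import math

# Scalar De Bruijn identity:  d/dt h(Y_t) = (1/2) J(Y_t),  Y_t = X + sqrt(t) Z.
# For Gaussian X ~ N(0, sigma2) and Z ~ N(0, 1), Y_t ~ N(0, sigma2 + t), so
#   h(Y_t) = 0.5 * log(2*pi*e*(sigma2 + t))   and   J(Y_t) = 1 / (sigma2 + t).

def entropy(sigma2: float, t: float) -> float:
    """Differential entropy of Y_t for a Gaussian input."""
    return 0.5 * math.log(2 * math.pi * math.e * (sigma2 + t))

def fisher(sigma2: float, t: float) -> float:
    """Fisher information of Y_t for a Gaussian input."""
    return 1.0 / (sigma2 + t)

sigma2, t, eps = 2.0, 0.5, 1e-6
# Left side: numerical derivative of entropy in t (central difference).
lhs = (entropy(sigma2, t + eps) - entropy(sigma2, t - eps)) / (2 * eps)
# Right side: half the Fisher information.
rhs = 0.5 * fisher(sigma2, t)
print(lhs, rhs)  # the two sides agree to numerical precision
```

The paper's mismatched, anisotropic extension replaces entropy with an $f$-divergence between two laws and Fisher information with a relative Fisher information, but the matched scalar case above is the template.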