🤖 AI Summary
This work proposes a vine copula–based stepwise variational inference method to overcome a limitation of traditional variational inference: prespecified posterior forms of fixed complexity struggle to capture complex latent dependencies. By constructing the posterior approximation tree by tree, the approach can model high-dimensional dependence structures. It optimizes an evidence lower bound derived from the Rényi divergence, which is needed to recover the correct variational parameters, and incorporates an adaptive stopping criterion that determines the model complexity automatically, without manual specification. The method retains parameter parsimony while significantly outperforming mean-field variational inference, with particularly strong results in applications such as sparse Gaussian processes.
📝 Abstract
We propose stepwise variational inference (VI) with vine copulas: a universal VI procedure that combines vine copulas with a novel stepwise estimation procedure for the variational parameters. A vine copula consists of a nested sequence of trees built from pair copulas, where more complex latent dependence can be modeled by adding trees. We propose to estimate the vine copula approximate posterior in a stepwise fashion, tree by tree along the vine structure. Further, we show that the usual backward Kullback-Leibler divergence cannot recover the correct parameters in the vine copula model; we therefore define the evidence lower bound via the Rényi divergence. Finally, an intuitive stopping criterion for adding further trees to the vine eliminates the need to pre-specify a complexity parameter of the variational distribution, as most other approaches require. Our method thus interpolates between mean-field VI (MFVI) and full latent dependence. In many applications, in particular sparse Gaussian processes, our method is parsimonious with parameters while outperforming MFVI.
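To make the Rényi-divergence objective mentioned in the abstract concrete, here is a minimal Monte Carlo sketch of the variational Rényi bound L_α = (1/(1−α)) · log E_q[(p(x, z)/q(z))^(1−α)] on a toy one-dimensional conjugate Gaussian model. This is an illustrative sketch only, not the paper's vine copula implementation; the toy model, the Gaussian variational family, and the sample size are assumptions for demonstration. The bound recovers the usual KL-based ELBO in the limit α → 1 and equals the log evidence whenever q matches the exact posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model (illustrative assumption):
#   z ~ N(0, 1),  x | z ~ N(z, 1),  observed x = 1.0.
# The exact posterior is N(x/2, 1/2), so the bound can be checked in closed form.
x = 1.0

def log_joint(z):
    """log p(x, z) = log N(z; 0, 1) + log N(x; z, 1)."""
    return -0.5 * (z**2 + (x - z) ** 2) - np.log(2 * np.pi)

def renyi_bound(mu, sigma, alpha, n=200_000):
    """Monte Carlo estimate of the Renyi variational bound for alpha != 1:
        L_alpha = 1/(1 - alpha) * log E_q[(p(x, z) / q(z))^(1 - alpha)],
    with q(z) = N(z; mu, sigma^2). Computed in log space for stability."""
    z = rng.normal(mu, sigma, size=n)
    log_q = -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    log_w = log_joint(z) - log_q              # log importance weights
    s = (1 - alpha) * log_w
    m = s.max()                               # log-sum-exp stabilization
    return (m + np.log(np.mean(np.exp(s - m)))) / (1 - alpha)

# With the exact posterior q = N(0.5, sqrt(0.5)), the bound is tight:
# it equals log p(x) = log N(1; 0, 2) for every alpha != 1.
exact = renyi_bound(0.5, np.sqrt(0.5), alpha=0.5)

# With a misspecified q, the bound is non-increasing in alpha,
# so smaller alpha gives a tighter (larger) value.
loose_small_alpha = renyi_bound(0.0, 1.0, alpha=0.5)
loose_large_alpha = renyi_bound(0.0, 1.0, alpha=2.0)
```

For the exact posterior the importance weights are constant, so the estimate is noise-free and equals log N(1; 0, 2) ≈ −1.5155 for any α; the gap that opens up under a misspecified q is what the stepwise procedure can monitor when deciding whether another vine tree is worth adding.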