🤖 AI Summary
This work proposes KProxNPLVM, a novel approach to soft sensor modeling that addresses the accuracy limitations of traditional nonlinear probabilistic latent variable models (NPLVMs) caused by amortized variational inference with parametric posterior approximations. By introducing a proximal relaxation of the objective function into NPLVM training and employing the Wasserstein distance to construct a nonparametric variational inference strategy, the method circumvents the constraints of finite-dimensional parameter spaces. This theoretically eliminates approximation error and ensures convergence. Experimental results on both synthetic and real-world industrial datasets demonstrate that KProxNPLVM significantly outperforms existing models, achieving substantial improvements in prediction accuracy and robustness for soft sensing applications.
📝 Abstract
Nonlinear Probabilistic Latent Variable Models (NPLVMs) are a cornerstone of soft sensor modeling due to their capacity for uncertainty delineation. However, conventional NPLVMs are trained with amortized variational inference, in which neural networks parameterize the variational posterior. While this parameterization simplifies implementation, it converts distributional optimization over an infinite-dimensional function space into parameter optimization over a finite-dimensional parameter space, introducing an approximation error gap that degrades soft sensor modeling accuracy. To alleviate this issue, we introduce KProxNPLVM, a novel NPLVM that relaxes the learning objective itself rather than restricting the posterior family. Specifically, we first prove that the conventional parametric approach incurs an approximation error. We then employ the Wasserstein distance as a proximal term to relax the learning objective, and derive a new nonparametric variational inference strategy by solving the resulting relaxed optimization problem. On this basis, we provide a rigorous derivation of KProxNPLVM's optimization procedure and prove that the algorithm converges while sidestepping the approximation error, consolidating these components into the complete KProxNPLVM. Finally, extensive experiments on synthetic and real-world industrial datasets demonstrate the efficacy of the proposed KProxNPLVM.
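As a rough illustration of the idea (notation assumed here, not taken from the paper), a Wasserstein-proximal relaxation of a variational objective typically takes a JKO-style proximal-point form:

```latex
% Minimal sketch, assuming the standard Wasserstein proximal-point (JKO) scheme;
% the paper's exact operator and objective may differ.
q_{t+1} \;=\; \arg\min_{q \,\in\, \mathcal{P}} \; \mathcal{F}(q) \;+\; \frac{1}{2\lambda}\, W_2^2\!\left(q,\, q_t\right)
```

Here $\mathcal{F}$ denotes the variational objective (e.g., the negative ELBO) viewed as a functional over distributions, $W_2$ is the 2-Wasserstein distance, and $\lambda > 0$ is a step size. Because the minimization ranges over the full space of probability measures $\mathcal{P}$ rather than a finite-dimensional parametric family, no parametric approximation gap of the kind the abstract describes is introduced.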