🤖 AI Summary
Optimization of ICNN-regularized variational image reconstruction is hindered by nonsmoothness and the nested structure inherent in ICNNs, which together make standard proximal methods hard to apply and leave slow subgradient schemes as the default.
Method: This paper proposes a reformulation that transforms the original nonsmooth, nested problem into a tractable convex optimization problem. The key idea is to use epigraphical projections of the activation functions to eliminate the nested structure of the ICNN, together with a proof that the reformulation is exactly equivalent to the original variational problem. An efficient primal-dual solver is then developed for the reformulated problem.
Results: Experiments across multiple image reconstruction tasks demonstrate that the proposed method outperforms conventional subgradient methods, achieving markedly faster convergence and more stable iterates within a convex optimization framework for ICNN-regularized models.
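To make the "nested structure" concrete, here is a minimal NumPy sketch of the standard ICNN construction (layer weights constrained nonnegative, convex nondecreasing activations, unconstrained skip connections from the input). This is an illustration of the generic architecture, not the paper's implementation; all names and layer sizes are chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_icnn(dims):
    """Random ICNN parameters for layer sizes dims = [d_in, h1, ..., 1].
    W_k couples consecutive activations and is kept nonnegative;
    A_k is an unconstrained skip connection from the input x."""
    params = []
    for k in range(len(dims) - 1):
        W = np.abs(rng.standard_normal((dims[k + 1], dims[k]))) if k > 0 else None
        A = rng.standard_normal((dims[k + 1], dims[0]))
        b = rng.standard_normal(dims[k + 1])
        params.append((W, A, b))
    return params

def icnn(params, x):
    """Nested forward pass z_{k+1} = relu(W_k z_k + A_k x + b_k).
    Convexity in x follows because relu is convex and nondecreasing
    and every W_k is elementwise nonnegative."""
    z = None
    for W, A, b in params:
        pre = A @ x + b if W is None else W @ z + A @ x + b
        z = np.maximum(pre, 0.0)
    return float(z.sum())  # scalar regularizer value
```

Because each layer's output feeds the next, any proximal step on the overall map has to see through every layer at once; this nesting is exactly what the paper's reformulation unrolls.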
📝 Abstract
We address the optimization problem in a data-driven variational reconstruction framework, where the regularizer is parameterized by an input-convex neural network (ICNN). While gradient-based methods are commonly used to solve such problems, they struggle to handle non-smoothness effectively, which often leads to slow convergence. Moreover, the nested structure of the neural network complicates the application of standard non-smooth optimization techniques, such as proximal algorithms. To overcome these challenges, we reformulate the problem and eliminate the network's nested structure. By relating this reformulation to epigraphical projections of the activation functions, we transform the problem into a convex optimization problem that can be efficiently solved using a primal-dual algorithm. We also prove that this reformulation is equivalent to the original variational problem. Through experiments on several imaging tasks, we demonstrate that the proposed approach outperforms subgradient methods in terms of both speed and stability.
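As a concrete instance of the epigraphical projections the abstract refers to, the projection onto the epigraph of ReLU, the set {(t, s) : s ≥ max(t, 0)}, has a simple closed form (a standard convex-analysis computation; the paper's general construction for arbitrary activations is not reproduced here):

```python
def proj_epi_relu(t, s):
    """Euclidean projection of the point (t, s) onto the epigraph of relu,
    i.e. the intersection of the half-planes {s >= 0} and {s >= t}."""
    if s >= max(t, 0.0):
        return t, s                 # already inside the epigraph
    if t + s <= 0.0:
        # nearest point lies on the flat boundary s = 0 (or the kink at 0)
        return min(t, 0.0), 0.0
    # nearest point lies on the sloped boundary s = t
    m = 0.5 * (t + s)
    return m, m
```

In a primal-dual scheme such projections are cheap per-coordinate operations, which is what makes the reformulated convex problem efficiently solvable.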