Untangling Lariats: Subgradient Following of Variationally Penalized Objectives

πŸ“… 2024-05-07
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This paper addresses convex optimization with variational penalties: estimating smooth time-series sequences under a Bregman divergence while enforcing structured sparsity on discrete derivatives (e.g., first differences). Methodologically, it introduces a unified subgradient-following framework supporting sparsity regularization on derivatives of arbitrary order, and proposes a lattice-based parametrization of regularizers through the outputs of convolutional filters, accommodating nonsmooth barrier functions and β„“β‚‚/β„“βˆž group-sparse norms that naturally induce higher-order priors such as sparse acceleration and jerk. Contributions include: (1) a unifying formulation encompassing denoising, piecewise-constant regression, and isotonic regression; (2) theoretical convergence guarantees; and (3) an efficient multivariate higher-order filtering solver, empirically validated on group-sparse smoothing and discrete-derivative control tasks.
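To make the setup concrete, here is a minimal sketch of the scalar objective under a squared-error Bregman divergence. The function names and the particular choice of penalty are illustrative, not taken from the paper:

```python
import numpy as np

def variational_objective(x, y, g, lam=1.0):
    # Squared-error Bregman divergence to the observations y, plus an
    # additive variational penalty g on the first differences of x.
    fit = 0.5 * np.sum((x - y) ** 2)
    penalty = lam * np.sum(g(np.diff(x)))
    return fit + penalty

# With g = |.| this is the fused-lasso objective, which favors
# piecewise-constant estimates x.
y = np.array([0.1, 0.0, 0.2, 1.1, 0.9, 1.0])
x = np.array([0.1, 0.1, 0.1, 1.0, 1.0, 1.0])
print(variational_objective(x, y, np.abs, lam=0.5))
```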

πŸ“ Abstract
We describe an apparatus for subgradient-following of the optimum of convex problems with variational penalties. In this setting, we receive a sequence $y_1,\ldots,y_n$ and seek a smooth sequence $x_1,\ldots,x_n$. The smooth sequence needs to attain the minimum Bregman divergence to an input sequence with additive variational penalties in the general form of $\sum_i g_i(x_{i+1}-x_i)$. We derive known algorithms such as the fused lasso and isotonic regression as special cases of our approach. Our approach also facilitates new variational penalties such as non-smooth barrier functions. We then introduce and analyze new multivariate problems in which $\mathbf{x}_i,\mathbf{y}_i\in\mathbb{R}^d$ with variational penalties that depend on $\|\mathbf{x}_{i+1}-\mathbf{x}_i\|$. The norms we consider are $\ell_2$ and $\ell_\infty$, which promote group sparsity. We also derive a novel lattice-based procedure for subgradient following of variational penalties characterized through the output of arbitrary convolutional filters. This paradigm yields efficient solvers for high-order filtering problems of temporal sequences in which sparse discrete derivatives such as acceleration and jerk are desirable.
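As one of the special cases named above, a minimal pool-adjacent-violators (PAVA) routine for isotonic regression is sketched below. This is the classical algorithm, shown as a reference point; it is not the paper's subgradient-following apparatus:

```python
import numpy as np

def isotonic_regression(y):
    """PAVA for min ||x - y||^2 subject to x nondecreasing.

    In the paper's notation this corresponds to the variational
    penalty g_i being the indicator of x_{i+1} - x_i >= 0.
    """
    vals, lens = [], []  # each block stores (mean value, block length)
    for v in y:
        vals.append(float(v)); lens.append(1)
        # Merge adjacent blocks while monotonicity is violated.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            total = vals[-1] * lens[-1] + vals[-2] * lens[-2]
            lens[-2] += lens[-1]
            vals[-2] = total / lens[-2]
            vals.pop(); lens.pop()
    return np.repeat(vals, lens)

print(isotonic_regression([3.0, 1.0, 2.0, 4.0]))  # -> [2. 2. 2. 4.]
```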
Problem

Research questions and friction points this paper is trying to address.

Subgradient following in convex optimization
Variational penalties for smooth sequences
Group sparsity with multivariate norms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Subgradient-following solver for convex variational objectives
Variational penalties under general Bregman divergences
Lattice-based subgradient following for convolutional-filter penalties (see the sketch below)
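Below is a minimal sketch of the convolutional-filter view referenced in the last item: the k-th order discrete-difference filter arises from repeated convolution with [1, -1], and penalizing the filter's output enforces sparsity on higher derivatives (k=2 acceleration, k=3 jerk). Function names are illustrative, and this only evaluates the penalty; it is not the paper's lattice-based solver:

```python
import numpy as np

def diff_filter(k):
    # Coefficients of the k-th order discrete-difference filter,
    # i.e. [1, -1] convolved with itself k times.
    f = np.array([1.0])
    for _ in range(k):
        f = np.convolve(f, [1.0, -1.0])
    return f

def filtered_penalty(x, k, norm=np.abs):
    # sum_i norm((D^k x)_i): sparsity on the k-th discrete derivative.
    # k=1 recovers the fused-lasso penalty; k=2 penalizes acceleration
    # (trend filtering); k=3 penalizes jerk.
    d = np.convolve(x, diff_filter(k), mode="valid")
    return np.sum(norm(d))

x = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0])  # piecewise-linear signal
print(filtered_penalty(x, 1))  # 5.0: every slope is nonzero
print(filtered_penalty(x, 2))  # 2.0: sparse second differences (one kink)
```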