🤖 AI Summary
This paper addresses distributed parameter estimation over multi-task graphs, where agents pursue heterogeneous objectives yet share latent structural relationships across tasks. To enable collaborative learning, such inter-task dependencies are modeled via nonsmooth regularization, overcoming the limitation of conventional smooth regularizers in capturing piecewise-constant parameter jumps. The authors propose a decentralized learning framework incorporating nonsmooth penalties (e.g., ℓ₀/ℓ₁, elastic net) and employ a forward-backward splitting strategy for efficient distributed optimization. Theoretically, the algorithm is proven to converge in the mean-square-error sense to within O(μ) of the optimal solution; closed-form proximal expressions are derived even for nonconvex regularizers. Extensive simulations validate its effectiveness and robustness in modeling sparsity and piecewise-constant structures. The key contribution lies in the systematic integration of nonsmooth regularization into multi-task graph learning, thereby relaxing restrictive smoothness assumptions and enabling more realistic, expressive modeling of task relationships.
📝 Abstract
In this work, we consider learning over multitask graphs, where each agent aims to estimate its own parameter vector. Although agents seek distinct objectives, collaboration among them can be beneficial in scenarios where relationships between tasks exist. Among the various approaches to promoting relationships between tasks and, consequently, enhancing collaboration between agents, one notable method is regularization. While previous multitask learning studies have focused on smooth regularization to enforce graph smoothness, this work explores non-smooth regularization techniques that promote sparsity, making them particularly effective in encouraging piecewise-constant transitions on the graph. We begin by formulating a global regularized optimization problem, which involves minimizing the aggregate sum of individual costs, regularized by a general non-smooth term designed to promote piecewise-constant relationships between the tasks of neighboring agents. Based on the forward-backward splitting strategy, we propose a decentralized learning approach that enables efficient solutions to the regularized optimization problem. Then, under convexity assumptions on the cost functions and the regularizer, we establish that the proposed approach converges in the mean-square-error sense within $O(\mu)$ of the optimal solution of the globally regularized cost. For broader applicability and improved computational efficiency, we also derive closed-form expressions for commonly used non-smooth (and, possibly, non-convex) regularizers, such as the weighted sum of the $\ell_0$-norm, $\ell_1$-norm, and elastic net regularization. Finally, we illustrate both the theoretical findings and the effectiveness of the approach through simulations.
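To make the forward-backward idea concrete, the sketch below shows the generic proximal-gradient iteration together with the kind of closed-form proximal maps the abstract refers to (soft-thresholding for $\ell_1$, hard-thresholding for $\ell_0$, shrink-then-scale for the elastic net). This is a minimal single-agent NumPy illustration under assumed notation, not the paper's actual decentralized multi-agent algorithm; all function names are illustrative.

```python
import numpy as np

def prox_l1(v, tau):
    # Soft-thresholding: closed-form prox of tau * ||v||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_l0(v, tau):
    # Hard-thresholding: closed-form prox of tau * ||v||_0
    # (non-convex penalty; each entry is kept iff v_i^2 > 2*tau).
    return np.where(v ** 2 > 2.0 * tau, v, 0.0)

def prox_elastic_net(v, tau, alpha, beta):
    # Prox of tau * (alpha * ||v||_1 + (beta / 2) * ||v||_2^2):
    # soft-threshold, then rescale.
    return prox_l1(v, tau * alpha) / (1.0 + tau * beta)

def forward_backward(grad_f, prox_g, w0, mu, n_iter=200):
    # Generic forward-backward splitting: a forward gradient step
    # on the smooth cost f, then a backward proximal step on the
    # non-smooth regularizer g, with step size mu.
    w = w0.copy()
    for _ in range(n_iter):
        w = prox_g(w - mu * grad_f(w), mu)
    return w

if __name__ == "__main__":
    # Toy usage: min_w 0.5 * ||w - b||^2 + lam * ||w||_1, whose
    # minimizer is exactly the soft-thresholding of b at level lam.
    b, lam = np.array([3.0, 0.2]), 1.0
    w_star = forward_backward(
        grad_f=lambda w: w - b,
        prox_g=lambda x, step: prox_l1(x, step * lam),
        w0=np.zeros(2),
        mu=0.5,
    )
    print(w_star)  # close to [2.0, 0.0]
```

In the paper's multitask setting, the prox would instead act on differences between neighboring agents' parameter vectors to promote piecewise-constant transitions across the graph; the scalar toy above only illustrates why the closed-form maps make each backward step cheap.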