Efficient Distributed Learning over Decentralized Networks with Convoluted Support Vector Machine

๐Ÿ“… 2025-03-10
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Distributed high-dimensional classification in decentralized networks suffers from slow convergence due to the double nonsmoothness of the hinge loss and the ℓ₁ penalty in standard penalized SVMs. Method: This paper proposes a distributed SVM framework that combines convolution-based smoothing (a differentiable convex approximation of the hinge loss) with a generalized alternating direction method of multipliers (ADMM) tailored to arbitrary decentralized network topologies. Contribution/Results: The paper establishes linear convergence and near-optimal statistical accuracy for the proposed algorithm, along with exact recovery of the sparse parameter support set. Extensive experiments on synthetic and real-world datasets show that the method outperforms existing distributed SVM approaches in communication efficiency, classification accuracy, and sparsity.

๐Ÿ“ Abstract
This paper addresses the problem of efficiently classifying high-dimensional data over decentralized networks. Penalized support vector machines (SVMs) are widely used for high-dimensional classification tasks. However, the double nonsmoothness of the objective function poses significant challenges in developing efficient decentralized learning methods. Many existing procedures suffer from slow, sublinear convergence rates. To overcome this limitation, we consider a convolution-based smoothing technique for the nonsmooth hinge loss function. The resulting loss function remains convex and smooth. We then develop an efficient generalized alternating direction method of multipliers (ADMM) algorithm for solving penalized SVM over decentralized networks. Our theoretical contributions are twofold. First, we establish that our generalized ADMM algorithm achieves provable linear convergence with a simple implementation. Second, after a sufficient number of ADMM iterations, the final sparse estimator attains near-optimal statistical convergence and accurately recovers the true support of the underlying parameters. Extensive numerical experiments on both simulated and real-world datasets validate our theoretical findings.
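To make the convolution-based smoothing idea concrete, here is a minimal sketch (not the authors' implementation) of the hinge loss max(0, 1 − u) convolved with a uniform kernel on [−h, h]. The choice of a uniform kernel and the bandwidth h = 0.5 are illustrative assumptions; the resulting loss agrees with the hinge outside [1 − h, 1 + h] and replaces the kink at u = 1 with a quadratic blend, so it remains convex while gaining a continuous gradient.

```python
import numpy as np

def smoothed_hinge(u, h=0.5):
    """Hinge loss max(0, 1 - u) convolved with a uniform kernel on [-h, h].

    Illustrative sketch: other kernels yield different closed forms.
    """
    t = 1.0 - u  # margin shortfall
    return np.where(t >= h, t,                        # linear region (active hinge)
           np.where(t <= -h, 0.0,                     # flat region (well classified)
                    (t + h) ** 2 / (4.0 * h)))        # quadratic blend near the kink

def smoothed_hinge_grad(u, h=0.5):
    """Derivative in u; continuous everywhere, unlike the raw hinge."""
    t = 1.0 - u
    return np.where(t >= h, -1.0,
           np.where(t <= -h, 0.0, -(t + h) / (2.0 * h)))
```

At the boundaries u = 1 ± h the quadratic piece meets the linear and flat pieces with matching values and slopes, which is what makes gradient-based and ADMM-type solvers attain the fast rates the abstract describes.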
Problem

Research questions and friction points this paper is trying to address.

Efficient classification of high-dimensional data in decentralized networks.
Overcoming slow convergence in decentralized learning with penalized SVMs.
Developing a generalized ADMM algorithm for linear convergence and accurate support recovery.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Convolution-based smoothing for hinge loss
Generalized ADMM for decentralized networks
Linear convergence with simple implementation
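The paper's generalized ADMM handles arbitrary decentralized topologies; as a simplified illustration only, the sketch below applies textbook global-consensus ADMM (not the paper's algorithm) to an ℓ₁-penalized SVM with a uniform-kernel smoothed hinge loss. The toy data shards, step sizes, and the inexact gradient solve of each local subproblem are all assumptions for the sketch.

```python
import numpy as np

def smoothed_hinge_grad(u, h=0.5):
    # derivative of the hinge convolved with a uniform kernel on [-h, h]
    t = 1.0 - u
    return np.where(t >= h, -1.0, np.where(t <= -h, 0.0, -(t + h) / (2.0 * h)))

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1, used for the l1 penalty
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def local_step(X, y, z, u, rho, h=0.5, steps=50, lr=0.1):
    # inexactly minimize mean smoothed hinge + (rho/2)||w - z + u||^2
    w = z.copy()
    for _ in range(steps):
        margins = y * (X @ w)
        g = X.T @ (smoothed_hinge_grad(margins, h) * y) / len(y)
        w -= lr * (g + rho * (w - z + u))
    return w

def consensus_admm(shards, lam=0.05, rho=1.0, iters=30):
    # shards: list of (X_k, y_k) pairs held by the K nodes (toy setup)
    K, d = len(shards), shards[0][0].shape[1]
    z = np.zeros(d)
    U = [np.zeros(d) for _ in range(K)]
    for _ in range(iters):
        # each node updates its local weights in parallel
        W = [local_step(X, y, z, U[k], rho) for k, (X, y) in enumerate(shards)]
        # consensus variable absorbs the l1 penalty via soft-thresholding
        z = soft_threshold(np.mean([W[k] + U[k] for k in range(K)], axis=0),
                           lam / (rho * K))
        # dual ascent on the consensus constraints
        for k in range(K):
            U[k] += W[k] - z
    return z
```

Because the smoothed loss has a Lipschitz gradient, each local subproblem is strongly convex once the ρ-augmentation is added, which is the structural property behind the linear convergence claimed above; the paper's generalized ADMM replaces the global averaging step here with neighbor-to-neighbor communication.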
๐Ÿ”Ž Similar Papers
No similar papers found.
Canyi Chen
Department of Biostatistics, University of Michigan
Nan Qiao
Amazon
Liping Zhu
Institute of Statistics and Big Data, Renmin University of China