Distributed Optimization of Pairwise Polynomial Graph Spectral Functions via Subgraph Optimization

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses distributed optimization over fixed-topology graphs under a global edge-weight budget constraint, targeting finite-degree polynomial functions of the Laplacian spectrum—emphasizing coordinated control over the entire spectrum, not merely extremal eigenvalues. We propose an iterative embedding framework based on 1-hop subgraph decomposition: the global objective is reformulated into a bilinear form, and singular value decomposition (SVD) of the zero-centered (ZC) matrix enables local gradient approximation aligned with the global descent direction. A learnable edge-update proposer supports efficient, one-shot structural adjustments. Degree regularization with warm-start initialization, coupled with randomized gossip-based average-degree estimation, ensures decentralization, edge-weight positivity, and strict budget feasibility. The method scales effectively to large geometric graphs: warm-start performance reaches 95% of centralized optimization, significantly accelerating convergence and reducing the spectral objective value.
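The randomized gossip-based average-degree estimation mentioned above can be sketched as follows. This is an illustrative toy, not the paper's implementation: each round a random edge wakes up and its two endpoints replace their local estimates with the pairwise mean, so all estimates converge to the global average degree without a central coordinator. The function name and graph are made up for the example.

```python
import random

def gossip_average_degree(edges, num_nodes, rounds=2000, seed=0):
    """Decentralized estimate of the global average degree via
    randomized pairwise gossip (illustrative sketch)."""
    rng = random.Random(seed)
    # Each node's initial local estimate is its own degree.
    degree = [0.0] * num_nodes
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    est = degree[:]
    for _ in range(rounds):
        i, j = rng.choice(edges)        # a random edge "wakes up"
        mean = 0.5 * (est[i] + est[j])  # pairwise averaging step
        est[i] = est[j] = mean          # sum is preserved each round
    return est

# Path graph on 4 nodes: degrees [1, 2, 2, 1], global average 1.5.
edges = [(0, 1), (1, 2), (2, 3)]
estimates = gossip_average_degree(edges, 4)
```

Because each update preserves the sum of estimates, the consensus value is exactly the average degree; only the convergence speed depends on the graph's connectivity.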

📝 Abstract
We study distributed optimization of finite-degree polynomial Laplacian spectral objectives under fixed topology and a global weight budget, targeting the collective behavior of the entire spectrum rather than a few extremal eigenvalues. By reformulating the global cost in a bilinear form, we derive local subgraph problems whose gradients approximately align with the global descent direction via an SVD-based test on the $ZC$ matrix. This leads to an iterate-and-embed scheme over disjoint 1-hop neighborhoods that preserves feasibility by construction (positivity and budget) and scales to large geometric graphs. For objectives that depend on pairwise eigenvalue differences $h(\lambda_i-\lambda_j)$, we obtain a quadratic upper bound in the degree vector, which motivates a ``warm start'' by degree regularization. The warm start uses randomized gossip to estimate the global average degree, accelerating subsequent local descent while maintaining decentralization and realizing $\sim 95\%$ of the performance of centralized optimization. We further introduce a learning-based proposer that predicts one-shot edge updates on maximal 1-hop embeddings, yielding immediate objective reductions. Together, these components form a practical, modular pipeline for spectrum-aware weight tuning that preserves constraints and applies across a broader class of whole-spectrum costs.
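The whole-spectrum objective in the abstract can be made concrete with a small sketch, assuming (hypothetically) a squared-difference choice of $h$: build the weighted Laplacian $L(w)$, take its full spectrum, and sum $h(\lambda_i-\lambda_j)$ over all unordered eigenvalue pairs. The function names and the example graph are illustrative, not from the paper.

```python
import numpy as np

def weighted_laplacian(edges, weights, n):
    """Assemble the weighted graph Laplacian L = D - W."""
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L

def pairwise_spectral_cost(edges, weights, n, h=np.square):
    """Sum h(lambda_i - lambda_j) over all unordered eigenvalue pairs
    of the weighted Laplacian (illustrative whole-spectrum objective)."""
    lam = np.linalg.eigvalsh(weighted_laplacian(edges, weights, n))
    diffs = lam[:, None] - lam[None, :]   # all ordered pair differences
    return 0.5 * h(diffs).sum()           # count each unordered pair once

# Triangle with unit weights: Laplacian eigenvalues are [0, 3, 3],
# so the squared pairwise differences sum to 9 + 9 + 0 = 18.
edges = [(0, 1), (1, 2), (0, 2)]
cost = pairwise_spectral_cost(edges, [1.0, 1.0, 1.0], 3)
```

A distributed method must decrease such a cost while touching only edge weights inside local 1-hop subgraphs, which is exactly the role of the gradient-alignment test described above.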
Problem

Research questions and friction points this paper is trying to address.

Optimize Laplacian spectral functions under topology and budget constraints
Align local subgraph gradients with global descent via SVD decomposition
Develop decentralized pipeline for whole-spectrum weight tuning in large graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed optimization via local subgraph problems
Warm-start using randomized gossip for degree regularization
Learning-based proposer predicts one-shot edge updates