Signed Graph Learning: Algorithms and Theory

📅 2025-07-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the critical problem of signed graph structure learning. We propose the first signed graph learning framework based on the net Laplacian operator. Unlike conventional unsigned-graph methods, our approach integrates low-pass filtering intuition with the signed-graph signal smoothness assumption, formulating a nonconvex optimization model that minimizes total variation. Theoretically, we establish the first convergence guarantee and derive an explicit estimation error bound, characterizing the interplay among sample size, number of nodes, and graph topology. Computationally, we design an efficient ADMM-based algorithm that reduces per-iteration complexity from $O(n^2)$ to $O(n)$, substantially improving scalability. Extensive experiments—including synthetic benchmarks and gene regulatory network inference—demonstrate that our method consistently outperforms state-of-the-art approaches in both structural recovery accuracy and computational efficiency.

Technology Category

Application Category

📝 Abstract
Real-world data is often represented through the relationships between data samples, forming a graph structure. In many applications, it is necessary to learn this graph structure from the observed data. Current graph learning research has primarily focused on unsigned graphs, which consist only of positive edges. However, many biological and social systems are better described by signed graphs that account for both positive and negative interactions, capturing similarity and dissimilarity between samples. In this paper, we develop a method for learning signed graphs from a set of smooth signed graph signals. Specifically, we employ the net Laplacian as a graph shift operator (GSO) to define smooth signed graph signals as the outputs of a low-pass signed graph filter defined by the net Laplacian. The signed graph is then learned by formulating a non-convex optimization problem where the total variation of the observed signals is minimized with respect to the net Laplacian. The proposed problem is solved using alternating direction method of multipliers (ADMM) and a fast algorithm reducing the per-ADMM iteration complexity from quadratic to linear in the number of nodes is introduced. Furthermore, theoretical proofs of convergence for the algorithm and a bound on the estimation error of the learned net Laplacian as a function of sample size, number of nodes, and graph topology are provided. Finally, the proposed method is evaluated on simulated data and gene regulatory network inference problem and compared to existing signed graph learning methods.
Problem

Research questions and friction points this paper is trying to address.

Learning signed graphs from smooth graph signals
Developing a non-convex optimization method for graph learning
Providing theoretical guarantees for algorithm convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses net Laplacian as graph shift operator
Solves non-convex optimization via ADMM
Reduces iteration complexity to linear
🔎 Similar Papers
No similar papers found.
A
Abdullah Karaaslanli
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824 USA
B
Bisakh Banerjee
Department of Statistics and Probability, Michigan State University, East Lansing, MI 48824 USA
Tapabrata Maiti
Tapabrata Maiti
Michigan State University
High-dimensional Data AnalysisBiostatistical MethodsMixed ModelsBayesian MethodsSpatial Data Analysis
Selin Aviyente
Selin Aviyente
Michigan State University
Signal ProcessingComputational neuroscienceTime-frequency analysis