DROP: Distributionally Robust Optimization for Multi-task Learning in Graphical Models

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of Gaussian graphical models when estimating precision matrices from high-dimensional, heavy-tailed, or contaminated data, where corruption often distorts the inferred conditional dependence structure. To overcome this limitation, the authors propose DROP, a novel method that, for the first time, integrates distributionally robust optimization into a multi-task node-wise regression framework augmented with structured sparsity regularization. DROP simultaneously ensures robustness against data perturbations and enforces sparsity while preserving provable theoretical error bounds. Empirical results demonstrate that the method reliably recovers modular network structures even under severe contamination: simulations show markedly reduced false-positive edge rates, and in real-world fMRI analyses DROP outperforms existing non-robust approaches. The implementation will be made publicly available.

📝 Abstract
Gaussian Graphical Models (GGMs) are widely used to infer conditional dependence structures in high-dimensional data. However, standard precision matrix estimators are highly sensitive to data contamination, such as extreme outliers and heavy-tailed noise. In this paper, we propose DROP (Distributionally Robust Optimization), a robust estimation method formulated within a multi-task nodewise regression framework. The proposed estimator enforces structural sparsity while resisting the influence of corrupted observations. Theoretically, we establish error bounds for the DROP estimator under general contamination. Through extensive high-dimensional simulations, we demonstrate that DROP consistently controls the rate of false positive edges and outperforms conventional non-robust estimators when data deviate from standard Gaussian assumptions. Furthermore, in a functional MRI (fMRI) application, DROP maintains a stable graph structure and preserves network modularity even when subjected to severe data perturbations, whereas competing methods yield excessively dense networks. To facilitate reproducible research, the DROP R package will be made publicly available on GitHub.
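The multi-task nodewise regression framework in the abstract can be sketched in code. The snippet below is NOT the authors' DROP estimator (their R package is not yet released); it is a minimal Meinshausen-Buhlmann-style neighborhood selection in Python, with a Huber-clipped loss standing in for the distributionally robust objective. Each node is regressed on all others with an l1 penalty, and an edge is kept when either nodewise coefficient is nonzero (the OR rule). All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm (elementwise shrinkage).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def robust_nodewise_lasso(X, lam=0.1, delta=1.345, n_iter=500):
    """Neighborhood selection with a Huber-type loss (illustrative stand-in
    for a distributionally robust objective). X is the n x p data matrix.
    Returns a symmetric boolean adjacency matrix via the OR rule."""
    n, p = X.shape
    B = np.zeros((p, p))          # row j holds node j's regression coefficients
    for j in range(p):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        # ISTA step size from the Lipschitz constant of the smooth part.
        step = 1.0 / (np.linalg.norm(Z, 2) ** 2 / n + 1e-12)
        b = np.zeros(p - 1)
        for _ in range(n_iter):
            r = y - Z @ b
            psi = np.clip(r, -delta, delta)   # Huber influence: caps outliers
            grad = -Z.T @ psi / n
            b = soft_threshold(b - step * grad, step * lam)
        B[j, np.arange(p) != j] = b
    A = np.abs(B) > 1e-8
    return np.logical_or(A, A.T)              # OR rule symmetrization
```

Clipping the residuals bounds the influence of any single contaminated observation on the gradient, which is the simplest way to mimic the robustness the paper obtains through its DRO formulation; the true DROP objective and its error bounds are given in the paper itself.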
Problem

Research questions and friction points this paper is trying to address.

Gaussian Graphical Models
data contamination
robust estimation
multi-task learning
precision matrix
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributionally Robust Optimization
Gaussian Graphical Models
Multi-task Learning
Robust Estimation
Structural Sparsity
Canruo Shen
Department of Computer Science, Mathematics, Physics and Statistics, University of British Columbia
Xintong Ji
Department of Data Science and Artificial Intelligence, The Hong Kong Polytechnic University
Qiong Li
Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Beijing Normal-Hong Kong Baptist University
Wenzhi Yang
School of Big Data and Statistics, Anhui University
Xiaoping Shi
University of British Columbia
Change point, Data depth, Data sharpening, Graph theory, Clustering