Large-Scale Network Utility Maximization via GPU-Accelerated Proximal Message Passing

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses large-scale Network Utility Maximization (NUM): allocating resources across multiple flows under link capacity constraints to maximize aggregate utility. To overcome the limitations of conventional methods (poor scalability, weak support for heterogeneous utility functions, and numerical fragility), the authors propose a GPU-accelerated proximal message-passing algorithm. Built on a variant of the Alternating Direction Method of Multipliers (ADMM), it combines sparse matrix-vector multiplication with element-wise proximal operators, enabling fully parallel updates and supporting heterogeneous utility functions that need not be strictly concave. Evaluated on problem instances with tens of millions of variables and constraints, the method achieves 4x to 20x speedups over state-of-the-art solvers while avoiding their memory bottlenecks, and it remains numerically robust under congestion and link-capacity degradation. To the authors' knowledge, this is the first work to systematically deploy a scalable, robust, and hardware-efficient distributed optimization framework for large-scale NUM.

📝 Abstract
We present a GPU-accelerated proximal message passing algorithm for large-scale network utility maximization (NUM). NUM is a fundamental problem in resource allocation, where resources are allocated across various streams in a network to maximize total utility while respecting link capacity constraints. Our method, a variant of ADMM, requires only sparse matrix-vector multiplies with the link-route matrix and element-wise proximal operator evaluations, enabling fully parallel updates across streams and links. It also supports heterogeneous utility types, including logarithmic utilities common in NUM, and does not assume strict concavity. We implement our method in PyTorch and demonstrate its performance on problems with tens of millions of variables and constraints, achieving 4x to 20x speedups over existing CPU and GPU solvers and solving problem sizes that exhaust the memory of baseline methods. Additionally, we show that our algorithm is robust to congestion and link-capacity degradation. Finally, using a time-expanded transit seat allocation case study, we illustrate how our approach yields interpretable allocations in realistic networks.
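The abstract's recipe (only sparse matrix-vector products with the link-route matrix plus element-wise proximal operator evaluations) can be illustrated with a small linearized-ADMM sketch. This is a hypothetical NumPy reconstruction under assumed update rules and step sizes, not the paper's PyTorch/GPU implementation; the function name, parameters, and the choice of linearized ADMM are illustrative assumptions.

```python
import numpy as np

def prox_log_utility(v, w, tau):
    """Element-wise prox of f(x) = -w*log(x):
    argmin_z -w*log(z) + (1/(2*tau))*(z - v)**2, closed form
    z = (v + sqrt(v^2 + 4*tau*w)) / 2, which is always positive."""
    return 0.5 * (v + np.sqrt(v ** 2 + 4.0 * tau * w))

def num_linearized_admm(R, c, w, rho=1.0, iters=3000):
    """Illustrative linearized-ADMM sketch (an assumption, not the paper's
    exact updates) for: maximize sum_j w_j*log(x_j) s.t. R @ x <= c, x >= 0.
    Every step is a matvec with R / R.T or an element-wise operation, so
    updates parallelize across streams and links (on GPU at scale)."""
    m, n = R.shape
    # Step size must satisfy tau * rho * ||R||_2^2 <= 1 for convergence.
    tau = 0.9 / (rho * np.linalg.norm(R, 2) ** 2)
    x, y, u = np.ones(n), np.zeros(m), np.zeros(m)
    for _ in range(iters):
        # Stream update: gradient step on the coupling term, then separable prox.
        v = x - tau * rho * (R.T @ (R @ x - y + u))
        x = prox_log_utility(v, w, tau)
        # Link update: project aggregate flows onto the capacity box [0, c].
        y = np.clip(R @ x + u, 0.0, c)
        # Scaled dual ("message") update.
        u = u + R @ x - y
    return x

# Two links, three routes: route 1 uses link 1, route 2 uses both, route 3 uses link 2.
R = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x = num_linearized_admm(R, c=np.ones(2), w=np.ones(3))
print(np.round(x, 3))  # analytic optimum of this toy instance is [2/3, 1/3, 2/3]
```

At scale, the dense `R` would be a sparse (e.g. CSR) link-route matrix and the matvecs would run on GPU, which is where the parallelism claimed in the abstract comes from.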
Problem

Research questions and friction points this paper is trying to address.

Solving large-scale network utility maximization problems efficiently
Handling resource allocation with link capacity constraints
Supporting heterogeneous utility types without strict concavity
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU-accelerated proximal message passing algorithm
Sparse matrix-vector multiplies with parallel updates
PyTorch implementation supporting heterogeneous utility types
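The "heterogeneous utility types without strict concavity" point can be made concrete with closed-form element-wise proximal operators. The sketch below (illustrative names and parameters, not from the paper) pairs a strictly concave logarithmic utility with a linear utility that is concave but not strictly so; both still admit cheap vectorized prox evaluations.

```python
import numpy as np

def prox_log(v, w, tau):
    """Prox for U(x) = w*log(x) (strictly concave):
    argmin_z -w*log(z) + (1/(2*tau))*(z - v)**2 = (v + sqrt(v^2 + 4*tau*w))/2."""
    return 0.5 * (v + np.sqrt(v ** 2 + 4.0 * tau * w))

def prox_linear(v, a, tau):
    """Prox for U(x) = a*x on x >= 0 (concave but NOT strictly concave):
    unconstrained minimizer v + tau*a, projected onto the nonnegative orthant."""
    return np.maximum(v + tau * a, 0.0)

v = np.array([0.5, -0.2, 1.0])
print(prox_log(v, w=1.0, tau=0.1))     # stays strictly positive for any v
print(prox_linear(v, a=2.0, tau=0.1))  # clips at zero where v + tau*a <= 0
```

Because each prox is a one-line closed form, streams with different utility types can be updated in a single batched, element-wise pass, which is what makes the per-iteration cost independent of utility heterogeneity.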
Akshay Sreekumar
Department of Electrical Engineering, Stanford University
Anthony Degleris
Gridmatic Inc
Ram Rajagopal
Associate Professor, Stanford University
Energy systems, smart grid, power systems, energy data analytics, smart transportation & sensor networks