CONGO: Compressive Online Gradient Optimization with Application to Microservices Management

📅 2024-07-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses sparse-gradient scenarios in zeroth-order online convex optimization—e.g., large-scale microservice queueing networks—where end-to-end latency and resource cost must be jointly optimized. Since latency depends only on a few queue resources, gradients are intrinsically sparse, yet conventional zeroth-order methods suffer from the curse of dimensionality. We propose the first compressed-sensing-based sparse gradient estimation framework for online optimization, enabling optimal regret bounds that scale with time horizon rather than dimension. Crucially, our gradient estimation sample complexity depends solely on sparsity level—not ambient dimension. Evaluated on microservice benchmarks, our method achieves several-fold higher sample efficiency than standard zeroth-order gradient descent, reduces end-to-end latency by 18%–32%, and cuts resource costs by 21%.

📝 Abstract
We address the challenge of zeroth-order online convex optimization where the objective function's gradient exhibits sparsity, indicating that only a small number of dimensions possess non-zero gradients. Our aim is to leverage this sparsity to obtain useful estimates of the objective function's gradient even when the only information available is a limited number of function samples. Our motivation stems from the optimization of large-scale queueing networks that process time-sensitive jobs. Here, a job must be processed by potentially many queues in sequence to produce an output, and the service time at any queue is a function of the resources allocated to that queue. Since resources are costly, the end-to-end latency for jobs must be balanced with the overall cost of the resources used. While the number of queues is substantial, the latency function primarily reacts to resource changes in only a few, rendering the gradient sparse. We tackle this problem by introducing the Compressive Online Gradient Optimization framework which allows compressive sensing methods previously applied to stochastic optimization to achieve regret bounds with an optimal dependence on the time horizon without the full problem dimension appearing in the bound. For specific algorithms, we reduce the samples required per gradient estimate to scale with the gradient's sparsity factor rather than its full dimensionality. Numerical simulations and real-world microservices benchmarks demonstrate CONGO's superiority over gradient descent approaches that do not account for sparsity.
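The core idea in the abstract — recovering a sparse gradient from far fewer function samples than the ambient dimension — can be illustrated with a minimal sketch. This is not the paper's CONGO algorithm: the toy objective, the measurement count `m`, and the use of Orthogonal Matching Pursuit (a standard compressed-sensing recovery routine) as the sparse solver are all illustrative assumptions. Each measurement is a finite difference along a random direction, which approximates an inner product between that direction and the true gradient; OMP then recovers the gradient from those inner products.

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal Matching Pursuit: recover an s-sparse x with A @ x ≈ y."""
    d = A.shape[1]
    support, residual = [], y.copy()
    x = np.zeros(d)
    for _ in range(s):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit restricted to the chosen support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(d)
        x[support] = coef
        residual = y - A @ x
    return x

def estimate_sparse_gradient(f, x, s, m, delta=1e-4, rng=None):
    """Estimate an s-sparse gradient of f at x from m+1 function samples.

    Each measurement y_i = (f(x + delta*a_i) - f(x)) / delta ≈ <a_i, grad f(x)>,
    so y ≈ A @ grad f(x), and sparse recovery inverts that linear system.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    A = rng.standard_normal((m, d)) / np.sqrt(m)  # random measurement matrix
    fx = f(x)
    y = np.array([(f(x + delta * a) - fx) / delta for a in A])
    return omp(A, y, s)

# Toy stand-in for a latency objective: 100 "queues", but only 3
# (hypothetical indices 4, 17, 60) actually affect the objective.
d, s = 100, 3
def f(x):
    return 2 * x[4]**2 + 3 * x[17]**2 + x[60]**2

x0 = np.ones(d)
# m = 30 samples instead of the ~d samples full finite differencing needs
g_hat = estimate_sparse_gradient(f, x0, s=s, m=30, rng=0)
```

At `x0 = 1`, the true gradient is 4, 6, and 2 on coordinates 4, 17, and 60 and zero elsewhere, so the recovered `g_hat` should be supported on exactly those coordinates — the sample count scales with the sparsity `s` rather than the dimension `d`, which is the regime the abstract targets.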
Problem

Research questions and friction points this paper is trying to address.

Zeroth-order online convex optimization with sparse gradients
Optimizing large-scale queueing networks with time-sensitive jobs
Reducing resource costs while balancing end-to-end job latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compressive sensing in gradient optimization
Sparsity-aware gradient estimation
Reduced samples for sparse gradients