🤖 AI Summary
This work addresses the challenge of efficiently solving numerous related linear programming (LP) subproblems arising in mixed-integer programming (MIP), particularly in contexts such as strong branching and bound tightening, where traditional methods fail to exploit GPU parallelism effectively. The authors propose a GPU-oriented batched first-order optimization method, reformulating the primal–dual hybrid gradient algorithm into matrix–matrix operations to significantly enhance parallel efficiency. This study represents the first systematic application of batched first-order methods to LP subproblems within MIP solvers, advocating that GPUs should perform core computational tasks rather than merely assist CPU-based heuristics. The approach promotes deeper co-design between MIP algorithms and GPU architectures. Experimental results show that the proposed method outperforms conventional simplex solvers at certain problem scales and on certain hardware configurations.
📝 Abstract
We present a batched first-order method for solving multiple linear programs in parallel on GPUs. Our approach extends the primal-dual hybrid gradient algorithm to efficiently solve batches of related linear programming problems that arise in mixed-integer programming techniques such as strong branching and bound tightening. By leveraging matrix-matrix operations instead of repeated matrix-vector operations, we obtain significant computational advantages on GPU architectures. We demonstrate the effectiveness of our approach on several case studies and identify the problem sizes at which first-order methods outperform traditional simplex-based solvers, depending on the available computational environment. This is a significant step toward the design and development of integer programming algorithms that tightly exploit GPU capabilities: we argue that certain operations should be allocated to GPUs and performed in full, rather than handled by lightweight heuristic approaches on CPUs.
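To illustrate the batching idea described above, here is a minimal NumPy sketch (not the authors' implementation): a primal-dual hybrid gradient loop for a batch of equality-form LPs, min c_j^T x subject to A x = b_j, x >= 0, that share the constraint matrix A, as strong-branching subproblems do. Stacking the per-problem iterates as columns turns every per-problem matrix-vector product into a single matrix-matrix product. The function name, step-size rule, and fixed iteration count are illustrative assumptions.

```python
import numpy as np

def batched_pdhg(A, B, C, iters=5000):
    """Sketch: solve k LPs  min C[:,j] @ x  s.t.  A @ x = B[:,j], x >= 0.

    A is the shared (m x n) constraint matrix; columns of B (m x k) and
    C (n x k) hold per-problem right-hand sides and objectives.  All k
    primal/dual iterates are updated at once via matrix-matrix products
    (A.T @ Y, A @ X) instead of k separate matrix-vector products.
    """
    m, n = A.shape
    k = B.shape[1]
    X = np.zeros((n, k))  # batched primal iterates, one column per LP
    Y = np.zeros((m, k))  # batched dual iterates
    # step sizes chosen so that tau * sigma * ||A||_2^2 < 1
    tau = sigma = 0.9 / np.linalg.norm(A, 2)
    for _ in range(iters):
        # primal step, then projection onto the nonnegative orthant
        X_new = np.maximum(0.0, X - tau * (C - A.T @ Y))
        # dual ascent step with primal extrapolation (2*X_new - X)
        Y = Y + sigma * (B - A @ (2.0 * X_new - X))
        X = X_new
    return X, Y
```

On a GPU, the two products `A.T @ Y` and `A @ X` become dense or sparse GEMMs over the whole batch, which is the source of the parallel efficiency claimed in the abstract; the NumPy version above only mirrors the structure of that computation on CPU.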