🤖 AI Summary
For the Max-Cut problem on large-scale graphs, this paper proposes a scalable continuous optimization framework. Methodologically, it employs a dimension-lifted quadratic programming relaxation to escape poor stationary points of the non-convex optimization and introduces the DECO algorithm, which alternates between lifted and reduced-dimensional representations. The framework integrates projected gradient ascent, batch-parallel GPU acceleration, importance-aware multi-initialization, and population-based evolutionary hyperparameter search. Unlike existing approaches that rely on training data or random sampling, the method requires only the problem instance itself, yet achieves comparable or superior solution quality relative to state-of-the-art baselines across diverse graph structures. It features linear memory complexity and strong scalability, significantly improving both efficiency and stability for solving Max-Cut on large sparse graphs. Experimental results demonstrate consistent performance gains over competitive baselines, particularly on graphs with millions of vertices and edges, while maintaining computational tractability and robust convergence behavior.
📝 Abstract
We propose a scalable framework for solving the Maximum Cut (MaxCut) problem in large graphs using projected gradient ascent on quadratic objectives. Notably, while our approach is differentiable and leverages GPUs for gradient-based optimization, it is not a machine learning method and does not require training data beyond the given problem formulation. Starting from a continuous relaxation of the classical quadratic binary formulation, we present a parallelized strategy that explores multiple initialization vectors in batch, offering an efficient and memory-friendly alternative to traditional solvers. We analyze the relaxed objective, showing it is convex; since maximizing a convex function over a box is itself a non-convex problem, projected gradient ascent admits fixed points, particularly at boundary points, that correspond to poor local optima. To address this, we introduce a lifted quadratic formulation that over-parameterizes the solution space, allowing the algorithm to escape poor fixed points, and we provide a theoretical characterization of these lifted fixed points. Finally, we propose DECO, a dimension-alternating algorithm that switches between the unlifted and lifted formulations, leveraging their complementary strengths along with importance-based degree initialization and a population-based evolutionary hyperparameter search. Experiments on diverse graph families show that our methods attain comparable or superior performance relative to recent training-data-intensive, dataless, and GPU-accelerated sampling approaches.
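The baseline component of the abstract, batched projected gradient ascent on the box relaxation of MaxCut, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a dense NumPy adjacency matrix, the function name `maxcut_pga` and all hyperparameters (`n_starts`, `steps`, `lr`) are invented for this sketch, random uniform starts stand in for the paper's importance-based degree initialization, and a batch of restarts on CPU stands in for the GPU-parallel batch strategy.

```python
import numpy as np

def maxcut_pga(adj, n_starts=32, steps=500, lr=0.05, seed=0):
    """Batched projected gradient ascent on the box relaxation of MaxCut:
    maximize (1/4) x^T L x over x in [-1, 1]^n, where L = D - A is the
    graph Laplacian. Names and hyperparameters here are illustrative."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    rng = np.random.default_rng(seed)
    # One column per random restart, explored in batch.
    X = rng.uniform(-1.0, 1.0, size=(n, n_starts))
    for _ in range(steps):
        grad = deg[:, None] * X - adj @ X      # L @ X without forming L
        X = np.clip(X + lr * grad, -1.0, 1.0)  # ascent step + box projection
    # Round the relaxed solutions to cuts and keep the best restart.
    S = np.where(X >= 0.0, 1.0, -1.0)
    L = np.diag(deg) - adj
    cuts = 0.25 * np.einsum('ib,ij,jb->b', S, L, S)
    best = int(np.argmax(cuts))
    return S[:, best], float(cuts[best])
```

Because L is positive semidefinite, the iterates are pushed toward the corners of the box, which is exactly the boundary fixed-point behavior the abstract describes; the lifted formulation and DECO exist to escape the poor corners this basic scheme can stall at.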