🤖 AI Summary
This work investigates the fundamental cause of the sharp performance degradation of Graph Neural Networks (GNNs) on hard Boolean Satisfiability (SAT) instances. To analyze the limited expressive power of existing GNN architectures on such instances, we introduce graph Ricci curvature—previously unexplored in SAT solving—as a geometric analytical tool. We theoretically establish a connection between negative Ricci curvature, local connectivity bottlenecks, and the "over-squashing" phenomenon in GNN message passing. Through synthetic random k-SAT instance generation, curvature computation, theoretical analysis, and extensive evaluation across multiple benchmarks, we empirically validate that negative curvature induces representation collapse and generalization failure in GNNs, and further demonstrate that curvature serves as an effective predictor of model performance degradation. Our findings provide a novel geometric perspective on the limitations of GNNs in logical reasoning tasks and lay a theoretical foundation for designing curvature-aware, geometrically robust SAT solvers.
📝 Abstract
Graph Neural Networks (GNNs) have recently shown promise as solvers for the Boolean Satisfiability problem (SAT) by operating on graph representations of logical formulas. However, their performance degrades sharply on harder instances, raising the question of whether this reflects a fundamental architectural limitation. In this work, we provide a geometric explanation through the lens of graph Ricci curvature (RC), which quantifies local connectivity bottlenecks. We prove that bipartite graphs derived from random k-SAT formulas are inherently negatively curved, and that this curvature decreases with instance difficulty. Building on this, we show that GNN-based SAT solvers are affected by over-squashing, a phenomenon in which long-range dependencies cannot be compressed into fixed-length node representations. We validate our claims empirically across several SAT benchmarks, confirming that curvature is both a strong indicator of problem complexity and a useful predictor of model performance. Finally, we connect our findings to design principles of existing solvers and outline promising directions for future work.
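The claim that random k-SAT incidence graphs are negatively curved can be illustrated with a small sketch. The paper does not specify which curvature notion it uses; the snippet below uses the simplified Forman-Ricci curvature F(u,v) = 4 − deg(u) − deg(v), a common combinatorial proxy, on the variable-clause bipartite graph of a random 3-SAT formula. All function names are illustrative, not from the paper.

```python
import random

def random_ksat(n_vars, n_clauses, k=3, seed=0):
    """Generate a random k-SAT formula as a list of clauses.

    Each clause is a tuple of k signed integers: +v means variable v,
    -v means its negation. Variables within a clause are distinct.
    """
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), k)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return clauses

def forman_curvatures(n_vars, clauses):
    """Simplified Forman-Ricci curvature F(u,v) = 4 - deg(u) - deg(v)
    for every edge of the variable-clause incidence graph."""
    var_deg = {v: 0 for v in range(1, n_vars + 1)}
    for clause in clauses:
        for lit in clause:
            var_deg[abs(lit)] += 1
    curvs = []
    for clause in clauses:
        for lit in clause:
            # Clause-node degree equals the clause width len(clause).
            curvs.append(4 - var_deg[abs(lit)] - len(clause))
    return curvs

# Clause/variable ratio ~4.25, near the random 3-SAT phase transition,
# where instances are empirically hardest.
clauses = random_ksat(n_vars=20, n_clauses=85, k=3, seed=42)
curvs = forman_curvatures(20, clauses)
print(min(curvs), max(curvs), sum(curvs) / len(curvs))
```

At this density the average variable degree is already well above 1, so the mean edge curvature is strongly negative, consistent with the abstract's claim; increasing the clause/variable ratio drives it lower still.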