🤖 AI Summary
In neural-symbolic learning, coupling discrete logic with neural networks faces two fundamental obstacles: logical operations are non-differentiable, and optimization is prone to getting stuck in local optima. Under Gödel logic, which models Boolean conjunction and disjunction as min and max, backpropagation turns out to be equivalent to a deterministic local search over SAT instances. Building on this, we propose the "Gödel Trick": injecting controllable noise into the model's logits so that the search can escape local optima. This enables fully differentiable SAT solving without probabilistic modeling and supports end-to-end neural-symbolic joint optimization. Evaluated on SATLIB benchmarks, the approach solves a broad range of SAT instances; on Visual Sudoku, it attains state-of-the-art performance. The framework is computationally efficient and preserves logical interpretability, bridging symbolic reasoning and deep learning without sacrificing differentiability.
📝 Abstract
Deep learning has achieved remarkable success across various domains, largely thanks to the efficiency of backpropagation (BP). However, BP's reliance on differentiability poses challenges in neurosymbolic learning, where discrete computation is combined with neural models. We show that applying BP to Gödel logic, which represents conjunction and disjunction as min and max, is equivalent to a local search algorithm for SAT solving, enabling the optimisation of discrete Boolean formulas without sacrificing differentiability. However, deterministic local search algorithms get stuck in local optima. Therefore, we propose the Gödel Trick, which adds noise to the model's logits to escape local optima. We evaluate the Gödel Trick on SATLIB, and demonstrate its ability to solve a broad range of SAT problems. Additionally, we apply it to neurosymbolic models and achieve state-of-the-art performance on Visual Sudoku, all while avoiding expensive probabilistic reasoning. These results highlight the Gödel Trick's potential as a robust, scalable approach for integrating symbolic reasoning with neural architectures.
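To make the min/max mechanics concrete, here is a minimal NumPy sketch of gradient ascent through Gödel semantics with noise injected at the logits. This is not the paper's implementation: the toy CNF instance, the sigmoid parameterization of truth values, the noise scale, and the step budget are all illustrative assumptions. Because min and max each route the gradient to a single argument, backpropagation here updates exactly one literal per step, which is the local-search behavior described above; the noise is what keeps that search from stalling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy CNF instance (hypothetical, for illustration):
# (x0 OR NOT x1) AND (NOT x0 OR x1) AND (x1 OR x2)
# Each clause is a list of (variable index, polarity) pairs.
clauses = [[(0, True), (1, False)],
           [(0, False), (1, True)],
           [(1, True), (2, True)]]

def truth_values(logits):
    """Map real-valued logits to soft truth values in [0, 1] via a sigmoid."""
    return 1.0 / (1.0 + np.exp(-logits))

def eval_and_grad(logits, noise_scale):
    """Evaluate the formula under Goedel semantics on noise-perturbed
    logits and return its value plus the min/max subgradient."""
    z = logits + noise_scale * rng.standard_normal(logits.shape)  # the "Trick"
    v = truth_values(z)
    grad = np.zeros_like(logits)
    clause_vals, best_lits = [], []
    for clause in clauses:
        lits = [v[i] if pos else 1.0 - v[i] for i, pos in clause]
        k = int(np.argmax(lits))          # max = Goedel disjunction
        clause_vals.append(lits[k])
        best_lits.append(clause[k])
    j = int(np.argmin(clause_vals))       # min = Goedel conjunction
    i, pos = best_lits[j]                 # only this literal receives gradient
    sign = 1.0 if pos else -1.0
    grad[i] = sign * v[i] * (1.0 - v[i])  # chain rule through the sigmoid
    return clause_vals[j], grad

def satisfied(assignment):
    """Check the hard (Boolean) assignment against every clause."""
    return all(any(assignment[i] == pos for i, pos in c) for c in clauses)

logits = rng.standard_normal(3)
for step in range(2000):
    val, g = eval_and_grad(logits, noise_scale=0.5)
    logits += 1.0 * g                     # ascend the formula's truth value
    assignment = truth_values(logits) > 0.5
    if satisfied(assignment):             # threshold gives a hard assignment
        break

print(satisfied(assignment))
```

Setting `noise_scale=0.0` recovers the deterministic local search, which can plateau; the noisy version corresponds to the perturbed search the abstract describes.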