Noise to the Rescue: Escaping Local Minima in Neurosymbolic Local Search

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In neural-symbolic learning, coupling discrete logic—particularly Gödel logic—with neural networks faces fundamental challenges: non-differentiability of logical operations and susceptibility to local optima during optimization. To address this, we propose the “Gödel Trick”: injecting controllable noise at the logits layer, such that backpropagation becomes equivalent to perturbed deterministic local search over SAT instances. This enables fully differentiable SAT solving without probabilistic modeling and supports end-to-end neural-symbolic joint optimization. Our method leverages Gödel logic’s min/max semantics to model Boolean conjunctions and disjunctions, tightly integrating noise-aware gradients with neural architecture design. Evaluated on SATLIB benchmarks, the method solves a broad range of SAT instances; on Visual Sudoku, it attains state-of-the-art performance. The framework is computationally efficient and preserves logical interpretability—bridging symbolic reasoning and deep learning without sacrificing differentiability or fidelity.
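The core semantics described above can be sketched in a few lines. The following is a minimal illustration (not the paper's implementation; all names and the noise scale are illustrative): each variable's truth value comes from a sigmoid of a noise-perturbed logit, a clause (disjunction) evaluates to the max over its literals, and the formula (conjunction) to the min over its clauses.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def godel_eval(cnf, logits, noise_scale=0.5, rng=None):
    """Evaluate a CNF formula under Goedel semantics with noisy logits.

    cnf: list of clauses; each clause is a list of signed ints
         (DIMACS-style: 3 means variable 3, -3 its negation).
    logits: dict mapping variable -> real-valued logit.
    Returns the Goedel truth value in [0, 1].
    """
    rng = rng or random.Random(0)
    # Perturbed truth values: noise on the logits is the "Goedel Trick"
    # that lets the induced local search escape local optima.
    truth = {v: sigmoid(z + rng.gauss(0.0, noise_scale))
             for v, z in logits.items()}
    clause_vals = []
    for clause in cnf:
        lit_vals = [truth[abs(l)] if l > 0 else 1.0 - truth[abs(l)]
                    for l in clause]
        clause_vals.append(max(lit_vals))   # Goedel disjunction
    return min(clause_vals)                 # Goedel conjunction

# Example: (x1 or x2) and (not x1 or x3)
cnf = [[1, 2], [-1, 3]]
logits = {1: 2.0, 2: -1.0, 3: 1.5}
value = godel_eval(cnf, logits)
```

Because min and max are piecewise linear, the formula's value (and hence its gradient) depends on only one literal per min/max at a time, which is what makes backpropagation behave like a discrete local-search move.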

📝 Abstract
Deep learning has achieved remarkable success across various domains, largely thanks to the efficiency of backpropagation (BP). However, BP's reliance on differentiability poses challenges in neurosymbolic learning, where discrete computation is combined with neural models. We show that applying BP to Gödel logic, which represents conjunction and disjunction as min and max, is equivalent to a local search algorithm for SAT solving, enabling the optimisation of discrete Boolean formulas without sacrificing differentiability. However, deterministic local search algorithms get stuck in local optima. Therefore, we propose the Gödel Trick, which adds noise to the model's logits to escape local optima. We evaluate the Gödel Trick on SATLIB, and demonstrate its ability to solve a broad range of SAT problems. Additionally, we apply it to neurosymbolic models and achieve state-of-the-art performance on Visual Sudoku, all while avoiding expensive probabilistic reasoning. These results highlight the Gödel Trick's potential as a robust, scalable approach for integrating symbolic reasoning with neural architectures.
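The abstract's equivalence between BP and local search can be made concrete: since conjunction is a min and disjunction is a max, the gradient of the formula with respect to the logits is nonzero for exactly one literal, namely the best literal of the currently worst-satisfied clause, so one gradient step nudges a single variable, just like a local-search move. The sketch below (an illustration under these assumptions, not the authors' code) performs one such noisy step.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def godel_trick_step(cnf, logits, lr=1.0, noise_scale=0.5, rng=None):
    """One gradient-ascent step on the Goedel truth value of a CNF formula.

    The min/max structure routes the gradient to exactly one literal:
    the best literal of the worst clause. The injected noise re-randomises
    which clause and literal get selected, helping escape local optima.
    Mutates `logits` in place; returns (variable updated, gradient).
    """
    rng = rng or random.Random(0)
    noisy = {v: z + rng.gauss(0.0, noise_scale) for v, z in logits.items()}
    truth = {v: sigmoid(z) for v, z in noisy.items()}

    def lit_val(l):
        return truth[abs(l)] if l > 0 else 1.0 - truth[abs(l)]

    # Worst clause under the Goedel conjunction (min over clauses) ...
    worst = min(cnf, key=lambda c: max(lit_val(l) for l in c))
    # ... and its best literal under the Goedel disjunction (max).
    best = max(worst, key=lit_val)
    v = abs(best)
    # d(sigmoid)/d(logit) = s * (1 - s); the sign flips for negated literals.
    grad = truth[v] * (1.0 - truth[v]) * (1.0 if best > 0 else -1.0)
    logits[v] += lr * grad  # push the chosen variable toward satisfaction
    return v, grad

# Example: one step on (x1 or x2) and (not x1 or x3)
cnf = [[1, 2], [-1, 3]]
logits = {1: 0.0, 2: 0.0, 3: 0.0}
v, g = godel_trick_step(cnf, logits)
```

Repeating this step with fresh noise each iteration yields the stochastic local search the paper describes; with `noise_scale=0` it degenerates to the deterministic local search that can get stuck.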
Problem

Research questions and friction points this paper is trying to address.

Overcoming local optima in neurosymbolic local search.
Integrating discrete Boolean formulas with neural models.
Enhancing SAT solving without probabilistic reasoning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines BP with Gödel logic for SAT solving.
Introduces noise on the logits to escape local optima.
Achieves state-of-the-art performance on Visual Sudoku without probabilistic reasoning.