🤖 AI Summary
Large language models (LLMs) plan poorly over logical constraints, face combinatorially exploding search spaces, and struggle to ensure both correctness and interpretability when solving classical constraint satisfaction problems (CSPs) such as Boolean satisfiability (SAT) and graph coloring.
Method: We propose a differentiable logic layer framework that encodes logical constraints as differentiable modules coupled to an LLM. This lets Boolean variables, logical structure, and constraints participate fully in gradient-based optimization, replacing discrete search and unifying natural-language-to-constraint modeling with solution-space refinement.
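To make the core idea concrete, here is a minimal sketch (not the paper's implementation): Boolean variables are relaxed to soft assignments in (0,1), each CNF clause contributes a differentiable "unsatisfied" score, and plain gradient descent drives the total score toward zero. The product-form clause loss, learning rate, and step count are all illustrative assumptions.

```python
import numpy as np

def clause_unsat(z, clause):
    """Soft unsatisfaction score of one CNF clause.

    clause: list of signed ints (DIMACS style), e.g. [1, -2] means (x1 OR NOT x2).
    z: soft assignments in (0,1). The score is the product of each literal's
    failure probability, so it is 0 as soon as any literal is (softly) true.
    """
    score = 1.0
    for lit in clause:
        p = z[abs(lit) - 1]
        score *= (1.0 - p) if lit > 0 else p
    return score

def refine(z, clauses, lr=0.5, steps=200):
    """Gradient descent on the summed soft-unsat loss (analytic product-rule gradient)."""
    z = z.copy()
    for _ in range(steps):
        grad = np.zeros_like(z)
        for clause in clauses:
            for lit in clause:
                i = abs(lit) - 1
                # d(loss)/dz_i = product of the other literals' factors, signed
                rest = 1.0
                for other in clause:
                    if other != lit:
                        p = z[abs(other) - 1]
                        rest *= (1.0 - p) if other > 0 else p
                grad[i] += -rest if lit > 0 else rest
        # Keep assignments strictly inside (0,1) so gradients never vanish entirely
        z = np.clip(z - lr * grad, 1e-3, 1 - 1e-3)
    return z
```

Rounding the refined `z` at 0.5 yields a Boolean assignment; on satisfiable instances with a reasonable starting point (e.g. an LLM-proposed solution), the loss can reach zero, meaning every clause is satisfied.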
Results: On SAT and graph coloring benchmarks, our method significantly outperforms state-of-the-art prompt-engineering and solver-augmented approaches, simultaneously improving solution correctness, inference efficiency, and decision interpretability. To our knowledge, this is the first work to integrate symbolic logic and neural learning at both the forward- and backward-propagation levels.
📝 Abstract
Considering the challenges that large language models (LLMs) face in logical reasoning and planning, prior efforts have sought to augment LLMs with access to external solvers. While progress has been made on simple reasoning problems, solving classical constraint satisfaction problems, such as the Boolean Satisfiability Problem (SAT) and Graph Coloring Problem (GCP), remains difficult for off-the-shelf solvers due to their intricate expressions and exponential search spaces. In this paper, we propose a novel differential logic layer-aided language modeling (DiLA) approach, in which logical constraints are integrated into the forward and backward passes of a network layer, providing another option for LLM tool learning. In DiLA, the LLM transforms the natural-language description into logic constraints and identifies an initial solution of the highest quality, while the differential logic layer iteratively refines the LLM-prompted solution. Leveraging the logic layer as a bridge, DiLA enhances the logical reasoning ability of LLMs on a range of reasoning problems encoded by Boolean variables, guaranteeing the efficiency and correctness of the solution process. We evaluate DiLA on two classic reasoning problems and empirically demonstrate that it consistently outperforms existing prompt-based and solver-aided approaches.
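For concreteness, the abstract's phrase "reasoning problems encoded by Boolean variables" can be illustrated with the standard reduction from graph coloring to CNF; the sketch below is that textbook encoding, not code from the paper, and the variable-numbering convention is an assumption.

```python
def coloring_to_cnf(n, edges, k):
    """Encode k-coloring of an n-node graph as DIMACS-style CNF clauses.

    Boolean variable var(i, c) (a positive integer) is true iff node i
    receives color c, for i in 0..n-1 and c in 0..k-1.
    """
    var = lambda i, c: i * k + c + 1
    # Each node gets at least one color
    clauses = [[var(i, c) for c in range(k)] for i in range(n)]
    # Each node gets at most one color
    for i in range(n):
        for c1 in range(k):
            for c2 in range(c1 + 1, k):
                clauses.append([-var(i, c1), -var(i, c2)])
    # Adjacent nodes get different colors
    for (i, j) in edges:
        for c in range(k):
            clauses.append([-var(i, c), -var(j, c)])
    return clauses
```

For a triangle with k = 3, this produces 3 at-least-one clauses, 9 at-most-one clauses, and 9 edge clauses; any proper 3-coloring satisfies all 21.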