Teaching Transformers to Solve Combinatorial Problems through Efficient Trial & Error

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit limited performance on combinatorial optimization problems such as Sudoku. This paper proposes a purely neural, architecture-agnostic approach, requiring no modifications to the model or external tools, that tightly integrates a standard decoder-only Transformer (based on GPT-2) with explicit depth-first search (DFS); the objective of minimizing the number of guesses is formalized as a contextual variant of Min-Sum Set Cover. The method combines imitation learning, informed guessing, and backtracking, enabling end-to-end training and inference. Its core contribution is the joint modeling of implicit neural reasoning and explicit search dynamics, yielding substantial gains in search efficiency without sacrificing model simplicity. Evaluated on Sudoku, the approach attains 99% solving accuracy, surpassing prior neuro-symbolic systems.

📝 Abstract
Despite their proficiency in various language tasks, Large Language Models (LLMs) struggle with combinatorial problems like Satisfiability, Traveling Salesman Problem, or even basic arithmetic. We address this gap through a novel approach for solving problems in the class NP. We focus on the paradigmatic task of Sudoku and achieve state-of-the-art accuracy (99%) compared to prior neuro-symbolic approaches. Unlike prior work that used custom architectures, our method employs a vanilla decoder-only Transformer (GPT-2) without external tools or function calling. Our method integrates imitation learning of simple Sudoku rules with an explicit Depth-First Search (DFS) exploration strategy involving informed guessing and backtracking. Moving beyond imitation learning, we seek to minimize the number of guesses until reaching a solution. We provide a rigorous analysis of this setup formalizing its connection to a contextual variant of Min-Sum Set Cover, a well-studied problem in algorithms and stochastic optimization.
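The search strategy described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the learned Transformer policy that ranks guesses is replaced here by a simple most-constrained-cell heuristic, and a guess is counted whenever the search branches among multiple legal digits.

```python
# Hypothetical sketch of model-guided DFS for Sudoku: pick a cell, try
# candidate digits in a preferred order, and backtrack on dead ends.
# `legal_digits` and the most-constrained-cell choice stand in for the
# paper's learned policy (assumption, not the authors' code).

def legal_digits(grid, r, c):
    """Digits 1-9 not already used in the row, column, or 3x3 box."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return [d for d in range(1, 10) if d not in used]

def solve(grid, stats):
    """DFS with backtracking; counts guesses made at branching cells."""
    # Choose the most constrained empty cell (stand-in for the policy).
    best = None
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                cands = legal_digits(grid, r, c)
                if best is None or len(cands) < len(best[2]):
                    best = (r, c, cands)
    if best is None:
        return True  # no empty cells left: solved
    r, c, cands = best
    for d in cands:
        if len(cands) > 1:
            stats["guesses"] += 1  # branching decision, i.e. a guess
        grid[r][c] = d
        if solve(grid, stats):
            return True
        grid[r][c] = 0  # backtrack
    return False  # dead end: no legal digit works here
```

An informed policy reduces `stats["guesses"]`, which is exactly the quantity the paper's training objective seeks to minimize.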
Problem

Research questions and friction points this paper is trying to address.

Teaching Transformers to solve combinatorial problems through trial and error
Achieving high accuracy on Sudoku using imitation learning and DFS exploration
Minimizing guess count in NP problems via contextual Min-Sum Set Cover
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer learns Sudoku via imitation and DFS
Integrates rule learning with informed guessing strategy
Minimizes guesses using contextual Min-Sum Set Cover
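The last bullet's connection to Min-Sum Set Cover can be illustrated with a simplified special case (an assumption for exposition, not the paper's full contextual formulation): when a policy assigns probabilities to candidate guesses and exactly one is correct, trying candidates in decreasing probability order minimizes the expected number of trials.

```python
# Toy illustration of guess ordering as a Min-Sum-Set-Cover-style objective.
# probs[i] is the (assumed) probability that candidate i is correct.

def expected_trials(probs, order):
    """Expected number of guesses when candidates are tried in `order`."""
    return sum(probs[c] * (pos + 1) for pos, c in enumerate(order))

def greedy_order(probs):
    """Try candidates in decreasing probability (optimal for one target)."""
    return sorted(range(len(probs)), key=lambda i: -probs[i])
```

For `probs = [0.1, 0.6, 0.3]`, the greedy order `[1, 2, 0]` gives an expected cost of 1.5 trials, versus 2.2 for the naive order `[0, 1, 2]`; a better-calibrated policy directly lowers this cost.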
Panagiotis Giannoulis
National Technical University of Athens, Greece
Yorgos Pantis
National and Kapodistrian University of Athens, Greece
Christos Tzamos
University of Athens
Machine Learning, Algorithmic Game Theory, Algorithms