🤖 AI Summary
This work addresses program synthesis with a general relational decomposition framework: input-output examples are encoded as sets of logical facts, and the mapping between them is modeled explicitly as a logical relation. Methodologically, it formalizes program synthesis as a relational subtask decomposition problem, reportedly the first such formulation, and leverages inductive logic programming (ILP) for interpretable, model-agnostic relation learning and inference. Crucially, no domain-specific architecture or task-specific customization is required, which enables cross-task generalization. Evaluated on four challenging benchmarks, the approach significantly outperforms standard sequence- and tree-based neural models, and, by interfacing with off-the-shelf ILP solvers, it surpasses state-of-the-art domain-specific synthesizers on multiple benchmarks. The contribution is a novel, interpretable, modular, and model-independent paradigm for program synthesis grounded in relational logic and ILP.
📝 Abstract
We introduce a relational approach to program synthesis. The key idea is to decompose synthesis tasks into simpler relational synthesis subtasks. Specifically, our representation decomposes a training input-output example into a set of input facts and a set of output facts. We then learn relations between the input and output facts. We demonstrate our approach using an off-the-shelf inductive logic programming (ILP) system on four challenging synthesis datasets. Our results show that (i) our representation can outperform a standard one, and (ii) an off-the-shelf ILP system with our representation can outperform domain-specific approaches.