🤖 AI Summary
Conventional program synthesis scales poorly: syntactic enumeration explodes combinatorially and executes inefficiently on CPUs. Method: This paper introduces the first semantics-driven, GPU-accelerated synthesis framework. Departing from traditional syntax-based enumeration, it enumerates logical formulas in parallel directly in semantic space, constrained by positive and negative execution traces. It employs low-divergence GPU kernels, semantics-guided search scheduling, and memory-access optimizations to maximize GPU utilization and computational throughput. Contribution/Results: Experiments demonstrate speedups of several orders of magnitude over state-of-the-art CPU-based synthesizers on large-scale synthesis tasks, achieving both high throughput and low latency. This work establishes a generalizable, semantics-first paradigm for GPU acceleration of formal methods.
📝 Abstract
Program synthesis is an umbrella term for generating programs and logical formulae from specifications. Given the remarkable performance improvements that GPUs have enabled for deep learning, a natural question arises: can we also implement a search-based program synthesiser on GPUs and achieve similar performance improvements? In this article we discuss our insights into this question, based on our recent work. The goal is to build a synthesiser running on GPUs which takes as input positive and negative example traces and returns a logical formula accepting the positive and rejecting the negative traces. With GPU-friendly programming techniques -- using the semantics of formulae to minimise data movement and reduce data-dependent branching -- our synthesiser scales to significantly larger synthesis problems, and operates much faster than the previous CPU-based state-of-the-art. We believe the insights that make our approach GPU-friendly have wide potential for enhancing the performance of other formal methods (FM) workloads.
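To make the semantics-first idea concrete, here is a minimal CPU sketch (not the paper's implementation, and with hypothetical names) of enumeration in semantic space: every candidate formula is keyed by its semantics -- the vector of truth values it takes on the example traces -- so observationally equivalent formulas are kept only once, and search terminates as soon as some candidate's semantics accepts all positive and rejects all negative traces. On a GPU these semantic vectors would become bitvectors processed in parallel with branch-free kernels.

```python
from itertools import product

def synthesize(pos, neg, max_rounds=4):
    """pos/neg: lists of example traces; each trace is a tuple of
    boolean atoms p0, p1, ...  Returns a formula string or None."""
    traces = pos + neg
    # Target semantics: True on every positive, False on every negative.
    target = tuple([True] * len(pos) + [False] * len(neg))
    # Seed the semantic table with atoms and their negations, keyed by
    # their truth-value vector over all traces (deduplicating
    # observationally equivalent candidates).
    sems = {}
    for i in range(len(traces[0])):
        key = tuple(t[i] for t in traces)
        sems.setdefault(key, f"p{i}")
        sems.setdefault(tuple(not v for v in key), f"(not p{i})")
    for _ in range(max_rounds):
        if target in sems:
            return sems[target]  # matches all examples
        grown = dict(sems)
        for (s1, f1), (s2, f2) in product(sems.items(), sems.items()):
            # Grow candidates in semantic space: pointwise AND / OR
            # of the semantic vectors, no syntax tree needed.
            grown.setdefault(tuple(a & b for a, b in zip(s1, s2)),
                             f"({f1} & {f2})")
            grown.setdefault(tuple(a | b for a, b in zip(s1, s2)),
                             f"({f1} | {f2})")
        sems = grown
    return sems.get(target)  # None if no formula found within the budget
```

For example, `synthesize([(True, False)], [(False, False), (True, True)])` returns a formula such as `(p0 & (not p1))` that holds on the positive trace and fails on both negatives. The semantic table plays the role of the observational-equivalence reduction that makes the search space tractable; the paper's contribution is running this style of enumeration at GPU scale.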