KLay: Accelerating Arithmetic Circuits for Neurosymbolic AI

📅 2024-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
In neurosymbolic AI, the irregular sparsity of arithmetic circuits severely limits parallelization efficiency and scalability on GPUs. To address this, the paper proposes KLay (knowledge layers), a new data structure for representing arithmetic circuits so that they can be efficiently parallelized on GPUs, together with two algorithms for translating traditional circuit representations into KLay and a further algorithm that exploits parallelization opportunities during circuit evaluation, all while preserving end-to-end differentiability. Experiments show speedups of multiple orders of magnitude over the state of the art on representative neurosymbolic tasks, paving the way toward scaling neurosymbolic AI to larger real-world applications.

📝 Abstract
A popular approach to neurosymbolic AI involves mapping logic formulas to arithmetic circuits (computation graphs consisting of sums and products) and passing the outputs of a neural network through these circuits. This approach enforces symbolic constraints onto a neural network in a principled and end-to-end differentiable way. Unfortunately, arithmetic circuits are challenging to run on modern AI accelerators as they exhibit a high degree of irregular sparsity. To address this limitation, we introduce knowledge layers (KLay), a new data structure to represent arithmetic circuits that can be efficiently parallelized on GPUs. Moreover, we contribute two algorithms used in the translation of traditional circuit representations to KLay and a further algorithm that exploits parallelization opportunities during circuit evaluations. We empirically show that KLay achieves speedups of multiple orders of magnitude over the state of the art, thereby paving the way towards scaling neurosymbolic AI to larger real-world applications.
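To make the abstract's setup concrete, here is a minimal sketch of evaluating a layered sum/product circuit over neural-network output probabilities. The layering mirrors the KLay idea that nodes grouped into a layer can be evaluated in one vectorized step; the function name, layer layout, and example formula are illustrative assumptions, not the paper's actual data structure or API.

```python
import numpy as np

def evaluate_layers(leaf_values, layers):
    """Evaluate a layered sum/product circuit bottom-up.

    leaf_values: 1-D array of input probabilities (e.g. neural-net outputs).
    layers: list of (op, node_children) pairs, where op is "sum" or "prod"
            and node_children holds one child-index list per node in the
            layer. All nodes in a layer share the same op, so each layer
            is evaluated in a single vectorized pass.
    """
    values = np.asarray(leaf_values, dtype=float)
    for op, node_children in layers:
        gathered = [values[np.asarray(children)] for children in node_children]
        if op == "sum":
            values = np.array([g.sum() for g in gathered])
        else:  # "prod"
            values = np.array([g.prod() for g in gathered])
    return values

# Tiny circuit for (a AND b) OR (NOT a AND c) with independent inputs,
# using leaves [P(a), P(b), P(not a), P(c)].
leaves = [0.8, 0.5, 0.2, 0.9]
layers = [
    ("prod", [[0, 1], [2, 3]]),  # a*b and (1-a)*c
    ("sum",  [[0, 1]]),          # add the two products
]
print(evaluate_layers(leaves, layers))  # prints [0.58]
```

Because every operation here is a gather followed by a reduction, the same structure maps naturally onto GPU tensor primitives, which is the parallelization opportunity the paper targets.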
Problem

Research questions and friction points this paper is trying to address.

Accelerating arithmetic circuits for neurosymbolic AI
Overcoming irregular sparsity in arithmetic circuits
Enabling GPU-efficient parallelization for neurosymbolic AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces knowledge layers (KLay)
Efficient GPU parallelization of arithmetic circuits
Algorithms for circuit translation and evaluation