Late Breaking Results: Conversion of Neural Networks into Logic Flows for Edge Computing

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of multiply-accumulate (MAC) operations that bottleneck neural network inference on resource-constrained edge-device CPUs. To overcome this limitation, the authors propose a novel approach that equivalently transforms a neural network into a decision tree, extracts decision paths leading to constant leaf nodes, and compresses them into a control-flow-dominated logic structure composed primarily of if-else statements. This transformation effectively bypasses the majority of MAC computations and represents the first efficient conversion from dataflow-centric neural networks to control-flow-centric logical programs—aligning naturally with CPU execution characteristics. Evaluated on a RISC-V CPU simulator, the method achieves up to a 14.9% reduction in inference latency while preserving model accuracy exactly.
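The conversion described above can be illustrated on a toy ReLU network. This is a minimal sketch, not the authors' implementation: the weights, function names, and two-unit structure are illustrative assumptions. The key observation it demonstrates is that each ReLU's activation sign defines a branch, every branch is an affine function, and the path where all units are inactive collapses to a constant leaf that skips the output-layer MACs entirely.

```python
import numpy as np

# Toy 2-input, 2-hidden-unit ReLU network.
# Weights are illustrative, not taken from the paper.
W1 = np.array([[1.0, -1.0],
               [0.5,  1.0]])
b1 = np.array([0.0, -0.5])
w2 = np.array([2.0, -1.0])
b2 = 0.1

def nn_dataflow(x):
    """Standard MAC-heavy inference: full matrix multiply plus ReLU."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return float(w2 @ h + b2)

def nn_logicflow(x):
    """Control-flow form: branch on each ReLU's sign (the decision-tree
    view of the network). On each path the output is affine in x, and the
    all-inactive path reduces to a constant leaf with no further MACs."""
    z0 = W1[0] @ x + b1[0]  # pre-activation of hidden unit 0
    z1 = W1[1] @ x + b1[1]  # pre-activation of hidden unit 1
    if z0 > 0:
        if z1 > 0:
            return float(w2[0] * z0 + w2[1] * z1 + b2)
        return float(w2[0] * z0 + b2)
    else:
        if z1 > 0:
            return float(w2[1] * z1 + b2)
        return b2  # constant leaf: output-layer MACs bypassed
```

Both functions compute identical outputs for every input, which mirrors the paper's claim of zero accuracy degradation; the speedup on a real CPU comes from the branch-dominated form matching the control-flow execution model, and from pruning MACs on paths ending in constant leaves.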

📝 Abstract
Neural networks have been successfully applied on various resource-constrained edge devices, which usually rely on central processing units (CPUs) rather than graphics processing units because of limited power budgets. State-of-the-art research still focuses on efficiently executing enormous numbers of multiply-accumulate (MAC) operations. However, CPUs themselves are not well suited to executing such mathematical operations at scale; they are better suited to executing control-flow logic, i.e., computer algorithms. To enhance the computational efficiency of neural networks on CPUs, in this paper we propose converting them into logic flows for execution. Specifically, neural networks are first converted into equivalent decision trees, from which decision paths with constant leaves are selected and compressed into logic flows. Such logic flows consist of if-else structures and a reduced number of MAC operations. Experimental results demonstrate that latency can be reduced by up to 14.9% on a simulated RISC-V CPU without any accuracy degradation. The code is open source at https://github.com/TUDa-HWAI/NN2Logic
Problem

Research questions and friction points this paper is trying to address.

neural networks
edge computing
CPU efficiency
multiply-accumulate operations
logic flows
Innovation

Methods, ideas, or system contributions that make the work stand out.

logic flow
neural network conversion
edge computing
decision tree
CPU acceleration