🤖 AI Summary
To address the challenges of irregular lookup table (LUT) compression in neural networks—namely, low hardware utilization, difficulty in compression, and susceptibility to accuracy degradation—this paper proposes a novel logic-synthesis-driven LUT compression method. First, “don’t-care” conditions are deliberately introduced as controllable degrees of freedom into input patterns not covered during training, enhancing structural self-similarity of LUTs in a data-driven manner to enable efficient decomposition. Second, hardware-aware LUT structural optimization is integrated with physical LUT mapping techniques. The method achieves up to 1.63× reduction in physical LUT resource usage while preserving model accuracy (accuracy drop ≤ 0.01 percentage points). Its core contribution lies in the pioneering reformulation of don’t-care conditions as learnable compression degrees of freedom, thereby unifying data-aware and logic-synthesis-based optimization within a single framework.
📝 Abstract
Lookup tables (LUTs) are frequently used to efficiently store arrays of precomputed values for complex mathematical computations. When used in the context of neural networks, these functions exhibit a lack of recognizable patterns, which presents an unusual challenge for conventional logic synthesis techniques. Several approaches are known to break down a single large lookup table into multiple smaller ones that can be recombined. Traditional methods, such as plain tabulation, piecewise linear approximation, and multipartite table methods, often yield inefficient hardware solutions when applied to LUT-based NNs. This paper introduces ReducedLUT, a novel method to reduce the footprint of LUTs by injecting don't cares into the compression process. This additional freedom introduces more self-similarities that can be exploited by known decomposition techniques. We then demonstrate a particular application to machine learning: by replacing input patterns unobserved in the training data of neural network models with don't cares, we enable greater compression with minimal model accuracy degradation. In practice, we achieve up to a $1.63\times$ reduction in physical LUT utilization, with a test accuracy drop of no more than $0.01$ accuracy points.
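The core idea can be illustrated with a toy sketch (not the paper's actual algorithm): if entries of a table corresponding to input patterns never seen in training are marked as don't cares, two sub-tables that would otherwise differ may become identical under some assignment of those free entries, so only one copy needs to be stored. The table values, the `unify` helper, and the halving decomposition below are all hypothetical choices for illustration.

```python
# A 16-entry LUT indexed by a 4-bit input. None marks inputs never
# observed in the training data, i.e. entries we may assign freely.
lut = [3, 1, None, 7, None, 1, 4, None,
       3, None, 2, 7, 0, None, 4, 5]

# Candidate decomposition: split on the top input bit into two halves.
lo, hi = lut[:8], lut[8:]

def unify(a, b):
    """Merge two half-tables if they agree wherever both are defined.

    Returns the merged table, or None if a hard conflict makes sharing
    impossible. Don't-care entries (None) match anything.
    """
    merged = []
    for x, y in zip(a, b):
        if x is not None and y is not None and x != y:
            return None  # both defined and different: cannot share
        merged.append(x if x is not None else y)
    return merged

shared = unify(lo, hi)
if shared is not None:
    # One 8-entry table now serves both halves: ~2x storage reduction.
    print("shared half-table:", shared)
```

Without the `None` entries, the two halves above conflict in several positions and no sharing is possible; the don't cares are exactly the extra degrees of freedom that make the halves self-similar.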