🤖 AI Summary
To address the high power consumption and hardware overhead of nonlinear activation functions in edge AI inference, this paper proposes TYTAN, a reconfigurable Taylor-series approximation engine realizing a Generalized Non-linear Approximation Engine (G-NAE) and the first hardware architecture for general-purpose nonlinear function approximation with dynamic order selection. The design integrates a dynamic Taylor-series truncation algorithm, a configurable pipelined datapath, and an RTL implementation targeting the FreePDK45 process, providing controllable accuracy, structural uniformity, and cross-model generalization. Evaluated on CNN and Transformer models, the engine operates above 950 MHz and, relative to NVDLA, delivers approximately 2× higher throughput, ~56% lower power consumption, and ~35× smaller area. These results demonstrate significant gains in energy efficiency and hardware scalability for accelerator-based edge AI inference.
📝 Abstract
The rapid advancement of AI architectures and the proliferation of AI-enabled systems have intensified the need for domain-specific architectures that improve both the speed and energy efficiency of AI inference, particularly at the edge. This need arises from the significant resource constraints, such as computational cost and energy consumption, associated with deploying AI algorithms, which involve intensive mathematical operations across multiple layers. Power-hungry operations, including General Matrix Multiplications (GEMMs) and activation functions, can be optimized to address these challenges. Optimization strategies for AI at the edge include algorithmic approaches such as quantization and pruning, as well as hardware methodologies such as domain-specific accelerators. This paper proposes TYTAN: a TaYlor-series based non-linear acTivAtion eNgine, which develops a Generalized Non-linear Approximation Engine (G-NAE). TYTAN targets the acceleration of non-linear activation functions while minimizing power consumption. It integrates a reconfigurable hardware design with a specialized algorithm that dynamically estimates the approximation order required for each activation function, aiming for minimal deviation from baseline accuracy. The proposed system is validated through performance evaluations on state-of-the-art AI architectures, including Convolutional Neural Networks (CNNs) and Transformers. Results from system-level simulations using Silvaco's FreePDK45 process node show that TYTAN operates at a clock frequency above 950 MHz and, compared to the baseline open-source NVIDIA Deep Learning Accelerator (NVDLA) implementation, achieves ~2× higher performance, ~56% lower power, and ~35× smaller area, supporting accelerated, energy-efficient AI inference at the edge.
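The core idea behind dynamic Taylor-series truncation can be sketched in software: choose the smallest series order whose worst-case error over the expected operand range stays below a tolerance. The sketch below is illustrative only and is not the paper's algorithm; the `exp` target function, the `taylor_exp` and `select_order` names, the [-1, 1] operand range, and the 1e-3 tolerance are all assumptions made for demonstration.

```python
import math

def taylor_exp(x, order):
    """Evaluate the order-N truncated Maclaurin series of exp(x) via Horner's rule.
    Horner form maps naturally onto a short multiply-accumulate pipeline."""
    acc = 1.0
    for n in range(order, 0, -1):
        acc = 1.0 + (x / n) * acc
    return acc

def select_order(xs, tol, max_order=16):
    """Return the smallest truncation order whose worst-case error over the
    sample points xs is below tol (falls back to max_order)."""
    for order in range(1, max_order + 1):
        worst = max(abs(taylor_exp(x, order) - math.exp(x)) for x in xs)
        if worst <= tol:
            return order
    return max_order

# Hypothetical operand range [-1, 1], e.g. exp() inputs after max-subtraction
# in a softmax; a tolerance of 1e-3 selects the truncation order.
xs = [i / 50.0 for i in range(-50, 51)]
order = select_order(xs, tol=1e-3)
```

A hardware engine would perform this order selection offline (or via a small lookup per function and input range) and configure the pipelined datapath depth accordingly, which is what makes the accuracy/energy trade-off controllable.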