TYTAN: Taylor-series based Non-Linear Activation Engine for Deep Learning Accelerators

📅 2025-12-28
🤖 AI Summary
To address the high power consumption and hardware overhead of nonlinear activation functions in edge AI inference, this paper proposes a reconfigurable Taylor-series approximation engine (G-NAE), the first hardware architecture for general-purpose nonlinear function approximation featuring dynamic order selection. G-NAE integrates a dynamic Taylor-series truncation algorithm, a configurable pipelined datapath, and an RTL implementation customized for the FreePDK45 process, enabling controllable accuracy, structural uniformity, and cross-model generalization. Evaluated on CNN and Transformer models, G-NAE achieves an operating frequency exceeding 950 MHz, delivers approximately 2× higher throughput than NVDLA, reduces power consumption by ~56%, and shrinks area by ~35×. These results demonstrate significant improvements in energy efficiency and hardware scalability for accelerator-based edge AI inference.
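The dynamic-order truncation idea can be sketched in software. This is a minimal illustration, not the paper's RTL: the tolerance-driven stopping rule below is an assumption about how "dynamic order selection" might behave, and the `tol` and `max_order` parameters are hypothetical.

```python
def taylor_exp(x, tol=1e-4, max_order=12):
    """Approximate e^x with a Taylor series, adding terms until the
    next term's magnitude drops below `tol` (dynamic order selection).
    Returns the approximation and the order actually used."""
    term, total = 1.0, 1.0  # order-0 term
    n = 0
    for n in range(1, max_order + 1):
        term *= x / n          # next term: x^n / n!
        total += term
        if abs(term) < tol:    # truncate: higher orders contribute < tol
            break
    return total, n

def approx_sigmoid(x, tol=1e-4):
    """Sigmoid built from the truncated exponential; evaluates the
    series at -|x| (where it converges quickly) and mirrors the result."""
    e, order = taylor_exp(-abs(x), tol)
    s = 1.0 / (1.0 + e)
    return (s if x >= 0 else 1.0 - s), order
```

Fewer terms suffice for small inputs, so the engine can trade accuracy for latency per call, which is the intuition behind controllable accuracy in the summary above.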

📝 Abstract
The rapid advancement in AI architectures and the proliferation of AI-enabled systems have intensified the need for domain-specific architectures that enhance both the acceleration and energy efficiency of AI inference, particularly at the edge. This need arises from the significant resource constraints, such as computational cost and energy consumption, associated with deploying AI algorithms, which involve intensive mathematical operations across multiple layers. High-power-consuming operations, including General Matrix Multiplications (GEMMs) and activation functions, can be optimized to address these challenges. Optimization strategies for AI at the edge include algorithmic approaches like quantization and pruning, as well as hardware methodologies such as domain-specific accelerators. This paper proposes TYTAN: TaYlor-series based non-linear acTivAtion eNgine, which explores the development of a Generalized Non-linear Approximation Engine (G-NAE). TYTAN targets the acceleration of non-linear activation functions while minimizing power consumption. TYTAN integrates a re-configurable hardware design with a specialized algorithm that dynamically estimates the approximation order needed for each activation function, aiming for minimal deviation from baseline accuracy. The proposed system is validated through performance evaluations with state-of-the-art AI architectures, including Convolutional Neural Networks (CNNs) and Transformers. Results from system-level simulations using Silvaco's FreePDK45 process node demonstrate TYTAN's capability to operate at a clock frequency >950 MHz, supporting accelerated, energy-efficient AI inference at the edge with ~2× performance improvement, ~56% power reduction, and ~35× lower area compared to the baseline open-source NVIDIA Deep Learning Accelerator (NVDLA) implementation.
Problem

Research questions and friction points this paper is trying to address.

Optimizes non-linear activation functions for edge AI accelerators
Reduces power consumption in AI inference hardware
Improves performance and energy efficiency of deep learning accelerators
Innovation

Methods, ideas, or system contributions that make the work stand out.

Taylor-series based non-linear activation engine
Re-configurable hardware with dynamic approximation algorithm
High-frequency operation with significant power and area reduction
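A truncated Taylor polynomial maps naturally onto a multiply-accumulate datapath via Horner's rule, which is one plausible reading of the "re-configurable hardware" bullet. The sketch below is a software model of that idea; the Q4.12 fixed-point format and the tanh coefficients are illustrative choices, not taken from the paper.

```python
FRAC = 12                                  # Q4.12: 12 fractional bits
def to_fix(x): return int(round(x * (1 << FRAC)))
def fix_mul(a, b): return (a * b) >> FRAC  # truncating fixed-point multiply

# Taylor coefficients of tanh(x) around 0: x - x^3/3 + 2x^5/15 (order 5)
COEFFS = [to_fix(c) for c in (0.0, 1.0, 0.0, -1/3, 0.0, 2/15)]

def horner_fix(x_fix, coeffs):
    """Evaluate sum c_n * x^n via Horner's rule: one multiply-accumulate
    per coefficient, mirroring one pipeline stage per polynomial order."""
    acc = 0
    for c in reversed(coeffs):
        acc = fix_mul(acc, x_fix) + c
    return acc

x = to_fix(0.5)
y = horner_fix(x, COEFFS) / (1 << FRAC)    # close to tanh(0.5) for small |x|
```

Swapping the coefficient table re-targets the same datapath to a different activation function, which is how a single engine can stay structurally uniform across models.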
Soham Pramanik
Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, India
Vimal William
SandLogic Technologies, Bangalore, India
Arnab Raha
Senior Research Scientist, NPU Advanced Architecture, Intel AI
Approximate Computing, Hardware Accelerator Design, Low Power Embedded Systems, System-on-Chips
Debayan Das
Assistant Professor, ESE, IISc Bangalore; Ex-Research Scientist, Intel Labs; PhD, Purdue University
Mixed Signal IC Design, Hardware Security, Biomedical Circuits/Systems
Amitava Mukherjee
Department of Computer Science and Engineering, Amrita University, Amritapuri, Kollam, Kerala, India
Janet L. Paluh
College of Nanoscale Science and Engineering, Nanobioscience, SUNY Polytechnic Institute, Albany, New York, USA