TransAxx: Efficient Transformers with Approximate Computing

📅 2024-02-12
🏛️ IEEE Transactions on Circuits and Systems for Artificial Intelligence
📈 Citations: 2
Influential: 0
🤖 AI Summary
Vision Transformers (ViTs) suffer from high computational overhead and lack native support for approximate computing, hindering their deployment on resource-constrained, low-power devices. Method: This paper proposes TransAxx, an end-to-end approximate computing framework for ViTs and the first to systematically integrate approximation into ViT inference, covering sensitivity analysis under approximate multipliers, approximation-aware fine-tuning, and accelerator generation. TransAxx combines Monte Carlo Tree Search (MCTS) with a hardware-driven, hand-crafted policy to automate the search for approximate accelerator configurations that jointly optimize power consumption and accuracy. Contribution/Results: Implemented as a PyTorch extension with configurable approximate multipliers, TransAxx achieves significant energy-efficiency gains and substantial power reduction on ImageNet, with only controlled accuracy degradation versus baseline ViTs. The experimental results validate that strong accuracy–power trade-offs are feasible for ViT models.
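The summary's "configurable approximate multipliers" can be emulated in software with a lookup table over all input pairs, which is a common way to simulate approximate arithmetic at the framework level. The sketch below is a minimal, framework-independent illustration in NumPy; the `truncating_mul` design (zeroing the low bits of the exact product) is a hypothetical stand-in, not one of the paper's evaluated multipliers:

```python
import numpy as np

def build_axx_lut(mul_fn, bits=8):
    """Tabulate a signed bits x bits approximate multiplier over all inputs."""
    n = 1 << bits
    vals = np.arange(-(n // 2), n // 2, dtype=np.int32)
    a, b = np.meshgrid(vals, vals, indexing="ij")
    return mul_fn(a, b)

def truncating_mul(a, b):
    # Hypothetical approximate design: zero the 4 low bits of the exact product.
    return (a * b) & ~0xF

LUT = build_axx_lut(truncating_mul)

def axx_matmul(x, w, bits=8):
    """int8 matmul where every scalar product is read from the LUT."""
    off = 1 << (bits - 1)                          # shift signed value to index
    prods = LUT[x[:, :, None] + off, w[None, :, :] + off]
    return prods.sum(axis=1)
```

For an 8-bit multiplier the table is only 256x256 entries, so every product in a quantized layer becomes a cheap gather, which is why a LUT-based simulator can evaluate many candidate multipliers quickly.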

📝 Abstract
Vision Transformer (ViT) models, recently introduced with the transformer architecture, have proven highly competitive and have become a popular alternative to Convolutional Neural Networks (CNNs). However, the high computational requirements of these models limit their practical applicability, especially on low-power devices. The current state of the art employs approximate multipliers to address the steeply increased compute demands of DNN accelerators, but no prior research has explored their use on ViT models. In this work we propose TransAxx, a framework based on the popular PyTorch library that provides fast, inherent support for approximate arithmetic to seamlessly evaluate the impact of approximate computing on DNNs such as ViT models. Using TransAxx, we analyze the sensitivity of transformer models on the ImageNet dataset to approximate multiplications and perform approximate-aware finetuning to regain accuracy. Furthermore, we propose a methodology for generating approximate accelerators for ViT models. Our approach uses a Monte Carlo Tree Search (MCTS) algorithm to efficiently search the space of possible configurations, guided by a hardware-driven, hand-crafted policy. Our evaluation demonstrates the efficacy of our methodology in achieving significant trade-offs between accuracy and power, resulting in substantial gains without compromising on performance.
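The abstract's MCTS-based configuration search can be sketched generically: each tree level assigns one layer an approximate multiplier, and rollouts score complete configurations. The sketch below is a simplified illustration, not the paper's algorithm; the multiplier names, their power/accuracy numbers, and the scalar reward are invented stand-ins, and the paper's hardware-driven rollout policy is replaced by uniform random rollouts:

```python
import math
import random

# Invented per-layer choices: (relative power, accuracy-loss penalty).
# The paper measures such quantities on hardware; these are stand-ins.
MULTIPLIERS = {"exact": (1.00, 0.000), "axx_a": (0.70, 0.004), "axx_b": (0.45, 0.020)}
NUM_LAYERS = 4

def reward(config):
    """Toy objective: reward low power, penalize accumulated accuracy loss."""
    power = sum(MULTIPLIERS[m][0] for m in config) / len(config)
    acc_loss = sum(MULTIPLIERS[m][1] for m in config)
    return (1.0 - power) - 10.0 * acc_loss

class Node:
    def __init__(self, config=()):
        self.config = config      # multiplier chosen for each layer so far
        self.children = {}
        self.visits = 0
        self.value = 0.0

def uct_select(node, c=0.5):
    """Pick the child maximizing the UCT score (exploitation + exploration)."""
    return max(
        node.children.values(),
        key=lambda ch: ch.value / ch.visits
        + c * math.sqrt(math.log(node.visits) / ch.visits),
    )

def rollout(config):
    """Complete the partial configuration at random and score it."""
    while len(config) < NUM_LAYERS:
        config += (random.choice(list(MULTIPLIERS)),)
    return reward(config)

def mcts(iterations=4000):
    root = Node()
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend through fully expanded nodes.
        while len(node.config) < NUM_LAYERS and len(node.children) == len(MULTIPLIERS):
            node = uct_select(node)
            path.append(node)
        # Expansion: add one untried child.
        if len(node.config) < NUM_LAYERS:
            m = random.choice([m for m in MULTIPLIERS if m not in node.children])
            node = node.children.setdefault(m, Node(node.config + (m,)))
            path.append(node)
        # Simulation and backpropagation.
        r = rollout(node.config)
        for n in path:
            n.visits += 1
            n.value += r
    # Read out the most-visited path as the chosen configuration.
    node, best = root, []
    while node.children:
        node = max(node.children.values(), key=lambda ch: ch.visits)
        best.append(node.config[-1])
    return tuple(best)
```

The appeal of MCTS here is that the configuration space grows exponentially with depth (choices^layers), yet the tree concentrates evaluations on promising per-layer assignments instead of enumerating all of them.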
Problem

Research questions and friction points this paper is trying to address.

High computational demands limit ViT model practicality on low-power devices
Lack of research on approximate multipliers for Vision Transformer models
Need for efficient framework to evaluate approximate computing impact on ViTs
Innovation

Methods, ideas, or system contributions that make the work stand out.

TransAxx framework enables approximate arithmetic for ViTs
Monte Carlo Tree Search optimizes ViT accelerator configurations
Approximate-aware finetuning regains accuracy on ImageNet
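The approximate-aware finetuning idea (run the forward pass through the approximate multiplier, but back-propagate as if the multiplication were exact, i.e. a straight-through estimator) can be shown on a one-weight toy model. Everything below is a hedged stand-in: the 0.9x systematic error in `approx_mul` and the training data are invented for illustration and do not model a real multiplier:

```python
# Stand-in approximate multiplier with a systematic ~10% underestimate.
# Real approximate multipliers have input-dependent error; this is the
# simplest model that lets fine-tuning visibly compensate.
def approx_mul(a, b):
    return 0.9 * a * b

def finetune(w, data, lr=0.01, epochs=200):
    """Fit a single weight so that approx_mul(w, x) matches y.

    Straight-through estimator: the forward pass uses the approximate
    product, but the gradient treats it as exact (d y_hat / d w = x).
    """
    for _ in range(epochs):
        for x, y in data:
            y_hat = approx_mul(w, x)       # approximate forward pass
            grad = 2.0 * (y_hat - y) * x   # STE gradient of squared error
            w -= lr * grad
    return w
```

On data generated by the exact relation y = 3x, fine-tuning drives w toward 10/3, so the approximate product 0.9 * (10/3) * x recovers 3x: the weight absorbs the multiplier's systematic error, mirroring how approximation-aware finetuning regains accuracy lost to the approximate hardware.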