Activator: GLU Activation Function as the Core Component of a Vision Transformer

📅 2024-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Transformer models suffer from high computational overhead due to the Softmax-based self-attention mechanism, hindering their efficient deployment in vision tasks. To address this, the paper proposes Activator, a Vision Transformer variant that replaces self-attention with a gated linear unit (GLU) activation incorporated into a multi-layer perceptron (MLP) structure, and further eliminates the second non-gated MLP of the standard transformer block. The resulting architecture is an attention-free vision model with a reduced computational cost. Experiments reported by the authors indicate that both the substitution and the reduction achieve competitive performance relative to baseline architectures, supporting attention-free designs as a more efficient yet capable alternative for visual representation learning.

📝 Abstract
The transformer architecture currently represents the main driver behind many successes in a variety of tasks addressed by deep learning, especially the recent advances in natural language processing (NLP) culminating in large language models (LLMs). In addition, the transformer architecture has attracted widespread interest from computer vision (CV) researchers and practitioners, allowing for many advancements in vision-related tasks and opening the door for multi-task and multi-modal deep learning architectures that share the same principle of operation. One drawback of these architectures is their reliance on the scaled dot-product attention mechanism with the softmax activation function, which is computationally expensive and requires large compute capabilities for both training and inference. This paper investigates substituting the attention mechanism usually adopted in transformer architectures with an architecture incorporating gated linear unit (GLU) activation within a multi-layer perceptron (MLP) structure, in conjunction with the default MLP of the traditional transformer design. A further step taken by this paper is to eliminate the second non-gated MLP to reduce the computational cost even more. Experimental assessments conducted in this research show that both the proposed substitution and the reduction offer competitive performance relative to baseline architectures, supporting the aim of this work to establish a more efficient yet capable alternative to the traditional attention mechanism as the core component of transformer architectures.
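The gated linear unit the abstract describes multiplies a linear projection of the input element-wise by a sigmoid-gated projection, then maps the result back to the model dimension. A minimal NumPy sketch of such a GLU-based MLP block is shown below; the function and weight names are illustrative and not taken from the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glu_block(x, w_gate, w_value, w_out):
    """GLU-style MLP block: a linear value path is modulated
    element-wise by a sigmoid gate, then projected back to d_model."""
    gate = sigmoid(x @ w_gate)       # gating path, shape (tokens, d_hidden)
    value = x @ w_value              # linear value path, same shape
    return (gate * value) @ w_out    # gated product, output projection

# Toy dimensions: 4 tokens, model width 8, hidden width 16
rng = np.random.default_rng(0)
tokens, d_model, d_hidden = 4, 8, 16
x = rng.standard_normal((tokens, d_model))
w_gate = rng.standard_normal((d_model, d_hidden)) * 0.1
w_value = rng.standard_normal((d_model, d_hidden)) * 0.1
w_out = rng.standard_normal((d_hidden, d_model)) * 0.1

y = glu_block(x, w_gate, w_value, w_out)
print(y.shape)  # (4, 8)
```

Unlike softmax attention, whose cost grows quadratically with the number of tokens, this block applies the same per-token projections everywhere, which is the source of the efficiency gain the paper targets.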
Problem

Research questions and friction points this paper is trying to address.

Reduce computational cost in transformer architectures
Replace attention and the second MLP with a GLU-based MLP
Maintain competitive performance with lower complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

GLU activation replaces softmax attention in transformers
Reduces computational cost significantly
Competitive performance with baseline architectures
Abdullah Nazhat Abdullah
Bahcesehir University, Istanbul, Turkiye
Tarkan Aydin
Bahcesehir University, Istanbul, Turkiye