TeLU Activation Function for Fast and Stable Deep Learning

📅 2024-12-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address pervasive issues in deep learning—namely gradient vanishing, slow convergence, and training instability—this paper proposes TeLU, a novel analytic activation function defined as $f(x) = x \cdot \tanh(\exp(x))$. TeLU is the first to couple exponential mapping with the hyperbolic tangent, inheriting ReLU’s computational efficiency and sparsity while preserving the gradient stability and differentiability of smooth activations; it further satisfies identity approximation near zero. Theoretical analysis establishes its universal approximation capability and favorable convergence properties. Extensive experiments demonstrate consistent improvements: on ResNet-18 (ImageNet), Dynamic-Pooling Transformer (Text8), and RNNs (PTB), TeLU acts as a plug-and-play replacement that accelerates training and boosts final accuracy. These results validate its architectural robustness and generalization superiority across diverse model families.

📝 Abstract
We propose the Hyperbolic Tangent Exponential Linear Unit (TeLU), a neural network hidden activation function defined as TeLU(x) = x · tanh(exp(x)). TeLU's design is grounded in the core principles of key activation functions, achieving strong convergence by closely approximating the identity function in its active region while effectively mitigating the vanishing gradient problem in its saturating region. Its simple formulation enhances computational efficiency, leading to improvements in scalability and convergence speed. Unlike many modern activation functions, TeLU seamlessly combines the simplicity and effectiveness of ReLU with the smoothness and analytic properties essential for learning stability in deep neural networks. TeLU's ability to mimic the behavior and optimal hyperparameter settings of ReLU, while introducing the benefits of smoothness and curvature, makes it an ideal drop-in replacement. Its analytic nature positions TeLU as a powerful universal approximator, enhancing both robustness and generalization across a multitude of experiments. We rigorously validate these claims through theoretical analysis and experimental validation, demonstrating TeLU's performance across challenging benchmarks, including ResNet18 on ImageNet, Dynamic-Pooling Transformers on Text8, and Recurrent Neural Networks (RNNs) on the Penn TreeBank dataset. These results highlight TeLU's potential to set a new standard in activation functions, driving more efficient and stable learning in deep neural networks, thereby accelerating scientific discoveries across various fields.
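The definition TeLU(x) = x · tanh(exp(x)) is simple enough to sketch directly. A minimal NumPy illustration (not the authors' reference implementation) shows the two behaviors the abstract highlights: near-identity output for large positive inputs and a small, smoothly saturating negative response:

```python
import numpy as np

def telu(x):
    # TeLU(x) = x * tanh(exp(x))
    # For large positive x, exp(x) is huge, tanh(...) -> 1, so TeLU(x) ~ x (identity).
    # For large negative x, exp(x) -> 0, tanh(exp(x)) ~ exp(x), so TeLU(x) ~ x * exp(x) -> 0.
    return x * np.tanh(np.exp(x))

def relu(x):
    # Baseline for comparison
    return np.maximum(x, 0.0)

xs = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(telu(xs))  # small negative values on the left, ~x on the right
print(relu(xs))
```

Note that for very large positive inputs `np.exp(x)` overflows to `inf`; `tanh(inf)` still evaluates to 1.0, so the output remains `~x`, though a production implementation would typically guard against the overflow warning.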
Problem

Research questions and friction points this paper is trying to address.

Activation Function
Deep Learning
Gradient Vanishing
Innovation

Methods, ideas, or system contributions that make the work stand out.

TeLU
Activation Function
Gradient Vanishing Problem