Leveraging Continuously Differentiable Activation Functions for Learning in Quantized Noisy Environments

📅 2024-02-04
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of degraded convergence and accuracy in deep learning models deployed on analog hardware due to quantization noise, this work systematically investigates the robustness of gradient propagation through activation functions under noise. We identify that ReLU exhibits gradient errors near zero up to two orders of magnitude larger than those of continuously differentiable activations (e.g., GELU, SiLU), revealing its intrinsic noise sensitivity from a gradient-error perspective for the first time. Based on this insight, we propose a general optimization strategy: replacing ReLU with smooth alternatives. Extensive validation across CNNs, FCNs, and Transformers demonstrates that adopting GELU or SiLU significantly improves training stability and final accuracy under quantization noise. Our findings provide both theoretical foundations and practical guidelines for deploying reliable deep neural networks on high-noise neuromorphic and in-memory computing platforms.
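To make the gradient-error comparison concrete, here is a minimal PyTorch sketch that perturbs pre-activations near zero with quantization-like noise and measures how much the activation gradients change. The 4-bit step size, uniform noise model, and input range are illustrative assumptions, not values taken from the paper or its repository.

```python
import torch
import torch.nn.functional as F

def grad_of(act, x):
    """Gradient of the activation w.r.t. its input, evaluated at points x."""
    x = x.clone().requires_grad_(True)
    act(x).sum().backward()
    return x.grad

step = 1.0 / 2**4                           # assumed 4-bit quantization step
x = torch.linspace(-0.5, 0.5, 2001)         # pre-activations near zero
noise = step * (torch.rand_like(x) - 0.5)   # assumed uniform quantization noise

for name, act in [("ReLU", F.relu), ("GELU", F.gelu), ("SiLU", F.silu)]:
    clean = grad_of(act, x)
    noisy = grad_of(act, x + noise)
    err = (noisy - clean).abs().mean()
    print(f"{name}: mean |gradient error| near zero = {err.item():.4f}")
```

Because the ReLU derivative is a step function, a small perturbation that flips the sign of a pre-activation near zero changes the gradient by a full unit, whereas the smooth GELU/SiLU derivatives change only gradually, which is the intuition behind the reported gap.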

📝 Abstract
Real-world analog systems intrinsically suffer from noise that can impede model convergence and accuracy on a variety of deep learning models. We demonstrate that differentiable activations like GELU and SiLU enable robust propagation of gradients, which helps to mitigate the analog quantization error that is ubiquitous to all analog systems. We perform analysis and training of convolutional, linear, and transformer networks in the presence of quantized noise. Here, we demonstrate that continuously differentiable activation functions are significantly more noise resilient than conventional rectified activations. In the case of ReLU, the error in gradients is 100x higher than that of GELU near zero. Our findings provide guidance for selecting appropriate activations to realize performant and reliable hardware implementations across several machine learning domains such as computer vision, signal processing, and beyond. Code available at: https://github.com/Vivswan/GeLUReLUInterpolation
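A minimal sketch of how such training under quantized noise might be set up, assuming an additive-uniform model of quantization noise on activations and a toy fully connected network (neither the module name nor the sizes are taken from the paper's released code):

```python
import torch
import torch.nn as nn

class NoisyQuant(nn.Module):
    """Simulate analog quantization by adding uniform noise of one step width."""
    def __init__(self, bits: int = 4):
        super().__init__()
        self.step = 1.0 / 2**bits  # assumed bit depth

    def forward(self, x):
        if self.training:
            return x + self.step * (torch.rand_like(x) - 0.5)
        return x

def make_mlp(activation: nn.Module) -> nn.Sequential:
    # Swapping nn.ReLU() for nn.GELU() or nn.SiLU() here is the kind of
    # activation replacement the paper argues improves noise resilience.
    return nn.Sequential(
        nn.Linear(784, 256), NoisyQuant(), activation,
        nn.Linear(256, 10),
    )

relu_net = make_mlp(nn.ReLU())
gelu_net = make_mlp(nn.GELU())  # smoother gradients under the same injected noise
```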
Problem

Research questions and friction points this paper is trying to address.

Deep Learning
Noise Robustness
ReLU Activation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Smooth Activation Functions
Noise Robustness
Quantization Error Reduction
Vivswan Shah
Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261 USA
Nathan Youngblood
University of Pittsburgh
silicon photonics, 2D materials, phase-change photonics, in-memory photonic computing