AI Summary
Odd sigmoidal activation functions (e.g., tanh, sinh) suffer from saturation, variance collapse, and learning-rate sensitivity in deep networks. Method: We propose a signal-preserving weight initialization scheme, formally defining the odd sigmoidal function class and deriving a closed-form noise scale based on its statistical properties to stabilize activation variance across target depths, without batch normalization. Contribution/Results: Our method unifies initialization for all members of this class, relaxing the implicit monotonicity-and-boundedness assumptions underlying Xavier/He initialization. Experiments demonstrate significantly improved convergence robustness and data efficiency in deep architectures and few-shot settings, thereby expanding the design space and practical applicability of nonlinear activation functions.
Abstract
Activation functions critically influence trainability and expressivity, and recent work has therefore explored a broad range of nonlinearities. However, activations and weight initialization are interdependent: without an appropriate initialization method, nonlinearities can cause saturation, variance collapse, and increased learning-rate sensitivity. We address this by defining an odd sigmoid function class and, given any activation f in this class, proposing an initialization method tailored to f. The method selects a noise scale in closed form so that forward activations remain well dispersed up to a target layer, thereby avoiding collapse to zero or saturation. Empirically, the approach trains reliably without normalization layers, exhibits strong data efficiency, and enables learning for activations under which standard initialization methods (Xavier, He, Orthogonal) often do not converge reliably.
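To make the core idea concrete, the sketch below numerically calibrates a per-layer weight standard deviation for a given odd activation f so that the post-activation variance is (approximately) preserved from layer to layer. This is only an illustration of the goal the abstract describes: the paper derives the noise scale in closed form, whereas here it is found by Monte-Carlo estimation and bisection; the function names (`post_activation_var`, `calibrate_sigma`), the target variance `q_target`, and all numerical settings are assumptions for illustration, not the authors' algorithm.

```python
# Illustrative sketch only (assumed names and settings): approximate the goal of
# the paper's closed-form noise scale by numerically choosing sigma_w so that
# Var[f(pre-activation)] matches the incoming variance, keeping activations
# well dispersed instead of collapsing or saturating.
import numpy as np

def post_activation_var(f, sigma_w, fan_in, q_in=0.25, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of activation variance after one linear + f layer.

    With i.i.d. weights ~ N(0, sigma_w**2) and inputs of variance q_in,
    pre-activations are approximately N(0, fan_in * sigma_w**2 * q_in).
    """
    rng = np.random.default_rng(seed)  # fixed seed keeps the estimate deterministic
    pre_std = np.sqrt(fan_in * q_in) * sigma_w
    z = rng.normal(0.0, pre_std, n_samples)
    return float(np.var(f(z)))

def calibrate_sigma(f, fan_in, q_target=0.25, lo=1e-3, hi=10.0, iters=60):
    """Bisection for sigma_w so that the variance map has q_target as a fixed point.

    The post-activation variance is monotone in sigma_w for odd, increasing f,
    so bisection converges to the scale that preserves variance layer to layer.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if post_activation_var(f, mid, fan_in, q_in=q_target) < q_target:
            lo = mid  # variance shrank: weights too small
        else:
            hi = mid  # variance grew: weights too large
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    fan_in = 512
    sigma = calibrate_sigma(np.tanh, fan_in)
    print(f"calibrated sigma_w for tanh, fan_in={fan_in}: {sigma:.4f}")
    print("resulting activation variance:",
          post_activation_var(np.tanh, sigma, fan_in))
</code>
```

In this toy version, depth-dependence enters only through the fixed point of the variance map; the paper's method instead picks the scale analytically for a target depth and for any member of the odd sigmoid class.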