Mining Generalizable Activation Functions

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a large language model (LLM)-driven evolutionary search method for automatically discovering activation functions that exhibit strong out-of-distribution generalization and encode specific inductive biases. Leveraging the AlphaEvolve framework, the approach employs an LLM as a mutation operator to directly search within the space of Python functions under FLOP constraints, without requiring a predefined search space, and uses out-of-distribution performance as the fitness metric. To the best of our knowledge, this is the first study to apply LLM-guided evolutionary algorithms to the discovery of activation functions, substantially enhancing both search flexibility and generalization-oriented design. Experiments on small-scale synthetic datasets successfully uncover novel, highly efficient activation functions, demonstrating the effectiveness and efficiency of the proposed method.
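The summary's key idea — treating out-of-distribution performance as the fitness of a candidate activation function — can be made concrete with a small sketch. This is an illustrative toy, not the paper's pipeline: the names (`candidate_swish`, `ood_fitness`), the random-features model, and the synthetic `sin` task are all assumptions chosen to keep the example self-contained.

```python
# Hypothetical sketch: scoring a candidate activation by out-of-distribution
# (OOD) fitness. A candidate is just a plain Python function, as in the paper's
# search space; everything else here is illustrative scaffolding.
import numpy as np

rng = np.random.default_rng(0)

def candidate_swish(x):
    """A candidate activation expressed as an ordinary Python function."""
    return x / (1.0 + np.exp(-x))

def ood_fitness(act, n_features=64):
    """Fit a random-features model on in-distribution inputs, then measure
    error on a shifted input range; higher (less negative) is fitter."""
    W = rng.normal(size=(1, n_features))
    b = rng.normal(size=n_features)
    target = lambda x: np.sin(2.0 * x)            # synthetic regression task
    x_train = rng.uniform(-1, 1, size=(256, 1))   # in-distribution support
    x_ood = rng.uniform(1, 2, size=(256, 1))      # shifted (OOD) support
    phi = lambda x: act(x @ W + b)                # random-features map
    w, *_ = np.linalg.lstsq(phi(x_train), target(x_train[:, 0]), rcond=None)
    ood_err = np.mean((phi(x_ood) @ w - target(x_ood[:, 0])) ** 2)
    return -float(ood_err)                        # fitness = negative OOD loss

print(ood_fitness(candidate_swish))
```

Because the model is fit only on `[-1, 1]` but scored on `[1, 2]`, the fitness rewards activations whose induced features extrapolate, which is the inductive-bias signal the paper targets.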

📝 Abstract
The choice of activation function is an active area of research, with different proposals aimed at improving optimization while maintaining expressivity. Additionally, the activation function can significantly alter the implicit inductive bias of the architecture, controlling its non-linear behavior. In this paper, in line with previous work, we argue that evolutionary search provides a useful framework for finding new activation functions, while we also make two novel observations. The first is that modern pipelines, such as AlphaEvolve, which rely on frontier LLMs as mutation operators, allow for a much wider and more flexible search space; e.g., over all possible Python functions within a certain FLOP budget, eliminating the need for manually constructed search spaces. In addition, these pipelines will be biased towards meaningful activation functions, given their ability to represent common knowledge, leading to a potentially more efficient search of the space. The second observation is that, through this framework, one can target not only performance improvements but also activation functions that encode particular inductive biases. This can be done by using performance on out-of-distribution data as a fitness function, reflecting the degree to which the architecture respects the inherent structure in the data in a manner independent of distribution shifts. We carry out an empirical exploration of this proposal and show that relatively small-scale synthetic datasets can be sufficient for AlphaEvolve to discover meaningful activations.
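The abstract's search loop — a population of candidate activations, an LLM acting as the mutation operator, and OOD fitness driving selection — can be sketched as below. This is a hedged toy, not the AlphaEvolve pipeline: `llm_mutate` is a stand-in for the frontier-LLM call (which in the real system rewrites the candidate's Python source), and the fitness here is a simple held-out-range objective chosen so the example runs on its own.

```python
# Hedged sketch of an LLM-driven evolutionary search over activation
# functions. `llm_mutate` is a placeholder for the LLM mutation operator;
# the genome, fitness, and population sizes are illustrative assumptions.
import math
import random

random.seed(0)

def llm_mutate(genome):
    """Placeholder for the LLM mutator: here it merely perturbs the
    parameters of a parametric activation a*x*sigmoid(b*x)."""
    a, b = genome
    return (a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))

def make_activation(genome):
    a, b = genome
    return lambda x: a * x / (1.0 + math.exp(-b * x))

def fitness(genome):
    """Toy stand-in for OOD fitness: closeness to ReLU on a held-out
    input range the candidate was not tuned on (higher is better)."""
    act = make_activation(genome)
    xs = [i / 4.0 for i in range(-20, 21)]
    return -sum((act(x) - max(0.0, x)) ** 2 for x in xs)

# Evolutionary loop: mutate every survivor, then keep the fittest half.
population = [(1.0, 1.0) for _ in range(8)]
for generation in range(20):
    population += [llm_mutate(g) for g in population]
    population = sorted(population, key=fitness, reverse=True)[:8]

best = population[0]
print(best, fitness(best))
```

Since parents survive selection, the best fitness is monotone non-decreasing across generations; swapping `llm_mutate` for an actual LLM call that rewrites source code (subject to a FLOP-budget check) recovers the shape of the pipeline the abstract describes.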
Problem

Research questions and friction points this paper is trying to address.

activation functions
generalization
inductive bias
out-of-distribution
neural architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

evolutionary search
activation functions
inductive bias
out-of-distribution generalization
LLM-based mutator