Do We Always Need the Simplicity Bias? Looking for Optimal Inductive Biases in the Wild

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the prevailing assumption that the ReLU-induced “simplicity bias” of neural networks is universally beneficial, showing that it is often detrimental outside vision: in tabular learning, regression, shortcut learning, and algorithmic reasoning (e.g., grokking). The authors propose a gradient-based meta-learning framework that synthesizes task-adaptive activation functions, along with a complexity-aware methodology for analyzing inductive biases. The study is presented as the first to systematically expose the failure of the simplicity bias in these domains, showing that priors of higher complexity can significantly improve generalization. Experiments show consistent gains over ReLU and GeLU baselines across diverse non-image tasks while maintaining near-optimal accuracy on image classification, supporting the core thesis that inductive biases should be tailored to task-specific characteristics.
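The paper's exact parameterization and meta-learning procedure are not given here, but the following is a minimal sketch of the general idea, not the authors' implementation: the activation function is parameterized (here, as a hypothetical learnable mix of basis nonlinearities) and its parameters are meta-learned on a held-out split while the network weights are trained on the training split, using a simple first-order approximation of the bilevel objective.

```python
# Illustrative sketch only: a learnable activation whose parameters are
# meta-learned on validation data (first-order approximation of the
# bilevel meta-learning objective). Basis choice and splits are assumptions.
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    def __init__(self):
        super().__init__()
        # Mixing coefficients over basis nonlinearities (hypothetical parameterization).
        self.coeffs = nn.Parameter(torch.tensor([1.0, 0.0, 0.0]))

    def forward(self, x):
        basis = torch.stack([torch.relu(x), torch.tanh(x), torch.sin(x)], dim=-1)
        return (basis * self.coeffs).sum(dim=-1)

act = LearnableActivation()
net = nn.Sequential(nn.Linear(8, 64), act, nn.Linear(64, 1))

# Inner optimizer trains the ordinary network weights; the outer optimizer
# updates only the activation's parameters.
weight_params = [p for m in net if not isinstance(m, LearnableActivation) for p in m.parameters()]
inner_opt = torch.optim.Adam(weight_params, lr=1e-3)
outer_opt = torch.optim.Adam(act.parameters(), lr=1e-2)

def loss_fn(x, y):
    return nn.functional.mse_loss(net(x), y)

# Placeholder data standing in for a task's train/validation splits.
x_tr, y_tr = torch.randn(256, 8), torch.randn(256, 1)
x_val, y_val = torch.randn(64, 8), torch.randn(64, 1)

for step in range(100):
    inner_opt.zero_grad(); loss_fn(x_tr, y_tr).backward(); inner_opt.step()
    if step % 10 == 0:  # occasional outer (meta) update on held-out data
        outer_opt.zero_grad(); loss_fn(x_val, y_val).backward(); outer_opt.step()
```

The learned coefficients can then be inspected to see whether the task pulls the activation away from the ReLU-like regime that induces the simplicity bias.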

📝 Abstract
Neural architectures tend to fit their data with relatively simple functions. This "simplicity bias" is widely regarded as key to their success. This paper explores the limits of this principle. Building on recent findings that the simplicity bias stems from ReLU activations [96], we introduce a method to meta-learn new activation functions and inductive biases better suited to specific tasks. Findings: We identify multiple tasks where the simplicity bias is inadequate and ReLUs suboptimal. In these cases, we learn new activation functions that perform better by inducing a prior of higher complexity. Interestingly, these cases correspond to domains where neural networks have historically struggled: tabular data, regression tasks, cases of shortcut learning, and algorithmic grokking tasks. In comparison, the simplicity bias induced by ReLUs proves adequate on image tasks where the best learned activations are nearly identical to ReLUs and GeLUs. Implications: Contrary to popular belief, the simplicity bias of ReLU networks is not universally useful. It is near-optimal for image classification, but other inductive biases are sometimes preferable. We showed that activation functions can control these inductive biases, but future tailored architectures might provide further benefits. Advances are still needed to characterize a model's inductive biases beyond "complexity", and their adequacy with the data.
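The abstract closes on the difficulty of characterizing inductive biases beyond "complexity". As a rough illustration of what a complexity probe can look like (this is a generic measure, not necessarily the paper's), one can evaluate a trained network on a dense 1-D slice of input space and summarize the Fourier spectrum of its outputs; a strong simplicity bias shows up as energy concentrated in low frequencies.

```python
# Illustrative complexity probe (assumption, not the paper's exact methodology):
# spectral centroid of a network's outputs along a 1-D input slice.
import torch

def spectral_complexity(model, x_min=-1.0, x_max=1.0, n_points=4096):
    """Mean output frequency along a 1-D input slice (higher = more complex function)."""
    xs = torch.linspace(x_min, x_max, n_points).unsqueeze(1)   # (N, 1) probe inputs
    with torch.no_grad():
        ys = model(xs).squeeze(-1)                             # (N,) scalar outputs
    spectrum = torch.fft.rfft(ys - ys.mean()).abs()            # magnitude spectrum
    freqs = torch.fft.rfftfreq(n_points, d=(x_max - x_min) / n_points)
    return (freqs * spectrum).sum() / spectrum.sum()           # spectral centroid

# Example: a ReLU MLP typically scores low on this measure, reflecting its bias
# toward low-frequency (simple) functions.
relu_mlp = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
print(float(spectral_complexity(relu_mlp)))
```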
Problem

Research questions and friction points this paper is trying to address.

Explores the limits of the simplicity bias in neural networks.
Introduces a method to meta-learn task-specific activation functions.
Identifies tasks where the simplicity bias is inadequate.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learns new activation functions tailored to individual tasks
Identifies tasks (tabular data, regression, shortcut learning, grokking) where the simplicity bias fails
Tailors inductive biases to specific data types