🤖 AI Summary
This study investigates the generalization capability and noise robustness of Transformers when learning Boolean functions from noisy features. Addressing the tendency of Transformers to converge to suboptimal solutions in noisy settings due to their implicit bias toward low-complexity functions, the work proposes a training strategy that incorporates a sensitivity-based regularization term. The approach is evaluated on benchmark tasks including k-sparse parity, majority functions, and random k-juntas, using sensitivity analysis and a regularized loss function. Experimental results show that Transformers outperform LSTMs on certain tasks but exhibit weaker performance on random k-juntas. Crucially, the introduction of sensitivity-aware regularization significantly enhances the model's robustness to noise, thereby validating the effectiveness of the proposed method.
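The summary mentions a regularized loss combining a task loss with a sensitivity-based penalty. The paper's exact regularizer is not given here; the following is a minimal, hypothetical sketch of one plausible form, in which the penalty measures how much a model's output changes when a single random input bit is flipped. The names `sensitivity_penalty`, `regularized_loss`, and the weight `lam` are illustrative assumptions, not the authors' API.

```python
import random

def sensitivity_penalty(model, xs):
    """Hypothetical empirical sensitivity regularizer (not the paper's
    exact term): average squared change in the model's output when one
    randomly chosen input bit is flipped."""
    penalty = 0.0
    for x in xs:
        i = random.randrange(len(x))     # pick a random coordinate
        y = list(x)
        y[i] ^= 1                        # flip that bit
        penalty += (model(tuple(y)) - model(x)) ** 2
    return penalty / len(xs)

def regularized_loss(model, xs, labels, base_loss, lam=0.1):
    """Task loss plus lam times the sensitivity penalty, so that
    high-sensitivity solutions are discouraged during training."""
    task = sum(base_loss(model(x), t) for x, t in zip(xs, labels)) / len(xs)
    return task + lam * sensitivity_penalty(model, xs)
```

A larger `lam` biases training more strongly toward low-sensitivity functions; the summary's claim is that such a term helps the model escape suboptimal low-sensitivity traps when tuned appropriately.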
📄 Abstract
Noise is ubiquitous in data used to train large language models, but it is not well understood whether these models are able to correctly generalize to inputs generated without noise. Here, we study noise-robust learning: are transformers trained on data with noisy features able to find a target function that correctly predicts labels for noiseless features? We show that transformers succeed at noise-robust learning for a selection of $k$-sparse parity and majority functions, compared to LSTMs which fail at this task for even modest feature noise. However, we find that transformers typically fail at noise-robust learning of random $k$-juntas, especially when the Boolean sensitivity of the optimal solution is smaller than that of the target function. We argue that this failure is due to a combination of two factors: transformers' bias toward simpler functions, combined with an observation that the optimal function for noise-robust learning typically has lower sensitivity than the target function for random Boolean functions. We test this hypothesis by exploiting transformers' simplicity bias to trap them in an incorrect solution, but show that transformers can escape this trap by training with an additional loss term penalizing high-sensitivity solutions. Overall, we find that transformers are particularly ineffective for learning Boolean functions in the presence of feature noise.
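The abstract's central quantity, Boolean (average) sensitivity, is the expected number of coordinates whose flip changes $f(x)$, averaged over all inputs. A small brute-force sketch of this definition, with the paper's two example function families ($k$-sparse parity and majority) as assumed illustrations:

```python
import itertools

def average_sensitivity(f, n):
    """Average sensitivity of f: {0,1}^n -> {0,1}: the number of
    coordinates whose flip changes f(x), averaged over all 2^n inputs."""
    total = 0
    for x in itertools.product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(tuple(y)) != fx:
                total += 1
    return total / 2 ** n

# k-sparse parity on the first k coordinates: flipping any relevant bit
# always changes the output, so the average sensitivity equals k.
parity3 = lambda x: x[0] ^ x[1] ^ x[2]

# Majority of all n bits: only "pivotal" bits (those that can swing the
# vote) are sensitive, so average sensitivity grows much more slowly.
majority = lambda x: int(sum(x) > len(x) / 2)
```

For example, `average_sensitivity(parity3, 5)` is exactly 3, while majority over 3 bits has average sensitivity 1.5; parities are maximally sensitive on their relevant coordinates, which is one reason a simplicity-biased learner may prefer a lower-sensitivity surrogate under feature noise.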