🤖 AI Summary
This work addresses the problem of estimating extremely low probabilities of harmful model outputs, such as safety violations, in language models, where conventional random sampling is too sample-inefficient to detect rare events. It compares two families of methods on small Transformer models under argmax sampling: importance sampling, which searches input space for inputs that trigger the rare output, and activation extrapolation, which fits a probability distribution to the model's logits and extrapolates its tail. Importance sampling proves the more accurate of the two, though both outperform naive sampling. The work further shows that minimizing the estimated probability of a rare harmful behavior generalizes adversarial training into an explicitly optimizable objective, and argues that better low probability estimation methods are needed to provide quantifiable guarantees about worst-case performance, strengthening the foundation of trustworthy AI.
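The core importance-sampling idea can be illustrated on a toy rare event that stands in for a rare model output. The sketch below is a minimal illustration, not the paper's method: the threshold, distributions, and proposal choice are all illustrative assumptions. It estimates p = P(X > 4) for X ~ N(0, 1) (true value roughly 3.2e-5), where naive Monte Carlo almost never observes the event, by instead drawing from a proposal centred on the rare region and reweighting each hit by the likelihood ratio p(x)/q(x) so the estimator stays unbiased.

```python
import math
import random

random.seed(0)
THRESHOLD = 4.0   # illustrative rare-event threshold, not from the paper
N = 100_000

def std_normal_pdf(x):
    """Density of the standard normal N(0, 1)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

# Naive Monte Carlo: almost every sample misses the rare event,
# so the estimate is dominated by sampling noise (often exactly 0).
naive = sum(random.gauss(0, 1) > THRESHOLD for _ in range(N)) / N

# Importance sampling: draw from a proposal q = N(4, 1) centred on the
# rare region, then reweight each sample by p(x) / q(x).
total = 0.0
for _ in range(N):
    x = random.gauss(THRESHOLD, 1)  # sample from the proposal q
    if x > THRESHOLD:
        total += std_normal_pdf(x) / std_normal_pdf(x - THRESHOLD)
is_estimate = total / N

# Closed-form reference value for comparison.
true_p = 0.5 * math.erfc(THRESHOLD / math.sqrt(2))
print(naive, is_estimate, true_p)
```

In the paper's setting the analogue of shifting the proposal is searching for inputs that give rise to the rare output; the reweighting step is what keeps the resulting probability estimate calibrated rather than merely adversarial.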
📝 Abstract
We consider the problem of low probability estimation: given a machine learning model and a formally specified input distribution, how can we estimate the probability of a binary property of the model's output, even when that probability is too small to estimate by random sampling? This problem is motivated by the need to improve worst-case performance, which distribution shift can make much more likely. We study low probability estimation in the context of argmax sampling from small transformer language models. We compare two types of methods: importance sampling, which involves searching for inputs giving rise to the rare output, and activation extrapolation, which involves extrapolating a probability distribution fit to the model's logits. We find that importance sampling outperforms activation extrapolation, but both outperform naive sampling. Finally, we explain how minimizing the probability estimate of an undesirable behavior generalizes adversarial training, and argue that new methods for low probability estimation are needed to provide stronger guarantees about worst-case performance.
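The activation-extrapolation idea can also be sketched in miniature. The code below is an assumed simplification, not the paper's implementation: `logit_gap` is a stand-in random simulator rather than a real model. The quantity it simulates is the gap between the target token's logit and the largest competing logit, so under argmax sampling the rare token is emitted exactly when the gap exceeds zero. Rather than waiting to observe that event, we fit a Gaussian to the observed gaps and read the rare-event probability off the fitted tail.

```python
import math
import random

random.seed(0)

def logit_gap(_x):
    # Stand-in for running the model on an input: in practice this
    # would be (target token's logit) - (max competing logit). The
    # simulated gaps are almost always negative, so the rare token
    # is essentially never observed directly.
    return random.gauss(-6.0, 1.5)  # illustrative parameters

samples = [logit_gap(None) for _ in range(10_000)]

# Fit a Gaussian to the observed gaps.
mean = sum(samples) / len(samples)
var = sum((g - mean) ** 2 for g in samples) / (len(samples) - 1)
std = math.sqrt(var)

# Extrapolate: probability that the gap exceeds 0 under the fitted
# Gaussian, i.e. the upper tail beyond z = (0 - mean) / std.
p_rare = 0.5 * math.erfc((0.0 - mean) / (std * math.sqrt(2)))
print(p_rare)
```

The appeal of this style of method is that it needs no sample in which the rare output actually occurs; its risk, as the comparison in the abstract suggests, is that the estimate is only as good as the parametric tail assumption.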