🤖 AI Summary
This work investigates the safety implications of test-time reinforcement learning (TTRL) for large language models, revealing that while TTRL can enhance reasoning, it also amplifies a model's existing tendencies: relatively safe models become safer, while vulnerable models become more harmful. In both cases, reasoning ability declines, a cost the authors term the “reasoning tax.” To probe this vulnerability adversarially, the authors propose HarmInject, specially designed prompts that force the model to answer jailbreak and reasoning queries together during TTRL's majority-vote self-consistency training. Experiments show that HarmInject substantially strengthens harmfulness amplification, confirming that TTRL's performance gains come at the cost of heightened susceptibility to malicious prompt injection. This work thus uncovers a critical safety-performance tension in test-time adaptation strategies.
📝 Abstract
Test-time training (TTT) has recently emerged as a promising way to improve the reasoning abilities of large language models (LLMs): the model learns directly from test data without access to labels. However, this reliance on test data also makes TTT methods vulnerable to harmful prompt injection. In this paper, we investigate the safety vulnerabilities of a representative self-consistency-based TTT method, test-time reinforcement learning (TTRL), which improves LLM reasoning by rewarding self-consistency, using the majority vote over sampled answers as the reward signal. We show that harmful prompt injection during TTRL amplifies the model's existing behaviors: safety amplification when the base model is relatively safe, and harmfulness amplification when it is vulnerable to the injected data. In both cases, reasoning ability declines, a cost we refer to as the reasoning tax. We further show that TTT methods such as TTRL can be exploited adversarially using specially designed "HarmInject" prompts that force the model to answer jailbreak and reasoning queries together, resulting in stronger harmfulness amplification. Overall, our results show that TTT methods which enhance LLM reasoning by promoting self-consistency can amplify existing behaviors and degrade reasoning, highlighting the need for safer TTT methods.
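The majority-vote reward at the heart of TTRL can be illustrated with a minimal sketch: each sampled answer for a prompt receives reward 1 if it matches the most common answer in the group of rollouts, and 0 otherwise. This is an illustrative simplification, not the authors' implementation; the function name and the flat answer strings are assumptions for the example.

```python
from collections import Counter


def majority_vote_reward(answers: list[str]) -> list[float]:
    """Self-consistency reward in the style of TTRL (illustrative sketch).

    Each sampled answer gets reward 1.0 if it agrees with the majority
    answer across all rollouts for the same prompt, else 0.0. No ground
    truth labels are used, which is what makes the scheme usable at
    test time, and also what lets injected prompts steer the reward.
    """
    # The most frequent answer acts as a pseudo-label for the group.
    majority, _count = Counter(answers).most_common(1)[0]
    return [1.0 if a == majority else 0.0 for a in answers]


# Four rollouts for one prompt; "42" is the majority answer.
rewards = majority_vote_reward(["42", "42", "17", "42"])
# → [1.0, 1.0, 0.0, 1.0]
```

Because the pseudo-label is whatever the model most often says, any injected prompt that shifts the distribution of sampled answers also shifts the reward, which is the lever the amplification effects described above rely on.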