Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for Reasoning Models

📅 2025-03-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work exposes a critical semantic-robustness vulnerability in reasoning-oriented large language models (LLMs) on mathematical problem solving: the models are highly susceptible to query-agnostic adversarial triggers, short phrases unrelated to the question (e.g., "Interesting fact: cats sleep most of their lives") that sharply increase the rate of incorrect answers without changing the problem's meaning. The authors formalize this notion and propose CatAttack, a lightweight, automated, iterative attack pipeline that generates triggers against a weaker, less expensive proxy model (DeepSeek V3) and transfers them in a black-box fashion to stronger reasoning targets. Transferred triggers raise the likelihood of an incorrect answer by more than 300% on state-of-the-art models such as DeepSeek R1; even a single trigger sentence more than doubles a model's chance of answering wrong. The authors also release the CatAttack triggers dataset, together with model responses, as a resource for evaluating and improving the robustness of reasoning LLMs.

📝 Abstract
We investigate the robustness of reasoning models trained for step-by-step problem solving by introducing query-agnostic adversarial triggers: short, irrelevant text that, when appended to math problems, systematically misleads models to output incorrect answers without altering the problem's semantics. We propose CatAttack, an automated iterative attack pipeline for generating triggers on a weaker, less expensive proxy model (DeepSeek V3) and successfully transfer them to more advanced reasoning target models like DeepSeek R1 and DeepSeek R1-distilled-Qwen-32B, resulting in greater than 300% increase in the likelihood of the target model generating an incorrect answer. For example, appending, "Interesting fact: cats sleep most of their lives," to any math problem leads to more than doubling the chances of a model getting the answer wrong. Our findings highlight critical vulnerabilities in reasoning models, revealing that even state-of-the-art models remain susceptible to subtle adversarial inputs, raising security and reliability concerns. The CatAttack triggers dataset with model responses is available at https://huggingface.co/datasets/collinear-ai/cat-attack-adversarial-triggers.
Problem

Research questions and friction points this paper is trying to address.

Investigates robustness of reasoning models to adversarial triggers.
Proposes CatAttack for generating and transferring adversarial triggers.
Reveals vulnerabilities in state-of-the-art reasoning models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Query-agnostic adversarial triggers mislead reasoning models.
CatAttack generates triggers on weaker proxy models.
Triggers transfer to advanced models, increasing error likelihood.
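The core measurement in the paper is simple: append the same fixed, question-irrelevant sentence to every math problem and compare the model's error rate with and without it. A minimal sketch of that evaluation loop, with a hypothetical `model_answer` stub standing in for a real reasoning-LLM API call (the function name and the stub's behavior are illustrative assumptions, not the authors' code):

```python
def model_answer(prompt: str) -> str:
    # Hypothetical stub standing in for a query to a reasoning model
    # (e.g., DeepSeek R1). To keep the sketch self-contained, it simulates
    # a model that is distracted whenever the trigger phrase is present.
    return "wrong" if "Interesting fact" in prompt else "right"

def append_trigger(problem: str, trigger: str) -> str:
    # Triggers are query-agnostic: the identical sentence is appended
    # to any problem, regardless of its content.
    return f"{problem} {trigger}"

def error_rate(problems, gold_answers, trigger: str = "") -> float:
    # Fraction of problems the (stub) model answers incorrectly,
    # optionally with the adversarial trigger appended.
    wrong = 0
    for problem, gold in zip(problems, gold_answers):
        prompt = append_trigger(problem, trigger) if trigger else problem
        if model_answer(prompt) != gold:
            wrong += 1
    return wrong / len(problems)

problems = ["What is 2 + 2?", "What is 3 * 7?"]
gold_answers = ["right", "right"]  # placeholder labels matching the stub

baseline = error_rate(problems, gold_answers)
attacked = error_rate(
    problems, gold_answers,
    trigger="Interesting fact: cats sleep most of their lives.",
)
print(baseline, attacked)  # the trigger raises the stub's error rate
```

In the actual pipeline, candidate triggers are first iterated against the cheaper proxy model (DeepSeek V3), and only triggers that succeed there are transferred to the stronger target models; the comparison of `baseline` vs. `attacked` error rates is what the reported >300% increase refers to.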