🤖 AI Summary
Current single-turn safety evaluations struggle to capture the novel scam risks that large language models (LLMs) face in multi-turn dialogues. This work proposes a controlled LLM-vs-LLM simulation framework that combines multilingual (Chinese/English) red-teaming attacks, quantitative dialogue-outcome metrics, and human-annotated strategy labels to systematically analyze attack patterns, defense mechanisms, and failure modes in multi-turn scam scenarios. The study reveals, for the first time, an intrinsic link between progressive persuasion strategies and model interaction failures, establishing "multi-turn interactional safety" as a critical and distinct dimension of LLM behavior. Evaluations of eight mainstream LLMs show that defensive failures primarily stem from misfired safety guardrails and role instability during extended interactions.
📄 Abstract
As LLMs gain persuasive agentic capabilities through extended dialogues, they introduce novel multi-turn conversational scam risks that single-turn safety evaluations fail to capture. We systematically study these risks with a controlled LLM-to-LLM simulation framework spanning multi-turn scam scenarios. Evaluating eight state-of-the-art models in English and Chinese, we analyze dialogue outcomes and qualitatively annotate attacker strategies, defensive responses, and failure modes. Results reveal that scam interactions follow recurrent escalation patterns, while defenses employ verification and delay mechanisms. Moreover, interactional failures frequently stem from safety guardrail activation and role instability. Our findings highlight multi-turn interactional safety as a critical, distinct dimension of LLM behavior.
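To make the LLM-to-LLM setup concrete, below is a minimal sketch of one simulated attacker/defender episode. This is not the paper's actual implementation: the names (`ChatFn`, `simulate_dialogue`, the two system prompts) and the outcome labels mentioned in the comments are hypothetical placeholders, and the sketch assumes any chat-completion backend wrapped as a plain Python callable.

```python
# Minimal sketch of a controlled LLM-vs-LLM multi-turn scam simulation.
# Assumption: each model is exposed as a callable that maps
# (system_prompt, message_history) -> reply string.
from typing import Callable

ChatFn = Callable[[str, list[dict]], str]

# Illustrative system prompts for the two sides of the dialogue.
ATTACKER_SYSTEM = "Role-play a scammer using progressive persuasion tactics."
DEFENDER_SYSTEM = "Role-play a cautious user who verifies suspicious claims."


def simulate_dialogue(attacker: ChatFn, defender: ChatFn,
                      max_turns: int = 8) -> dict:
    """Run one attacker/defender episode and record the transcript."""
    transcript: list[dict] = []
    attacker_view: list[dict] = []  # conversation as seen by the attacker
    defender_view: list[dict] = []  # conversation as seen by the defender

    for turn in range(max_turns):
        # Attacker speaks; its message is the "user" turn from the defender's view.
        attack_msg = attacker(ATTACKER_SYSTEM, attacker_view)
        attacker_view.append({"role": "assistant", "content": attack_msg})
        defender_view.append({"role": "user", "content": attack_msg})
        transcript.append({"turn": turn, "speaker": "attacker", "text": attack_msg})

        # Defender replies; its message is the "user" turn from the attacker's view.
        reply = defender(DEFENDER_SYSTEM, defender_view)
        defender_view.append({"role": "assistant", "content": reply})
        attacker_view.append({"role": "user", "content": reply})
        transcript.append({"turn": turn, "speaker": "defender", "text": reply})

    # Outcome labels (e.g. complied / verified / refused / role-broke) would be
    # assigned afterwards via quantitative metrics plus human annotation.
    return {"transcript": transcript}

# Usage: simulate_dialogue(my_attacker_model, my_defender_model), where each
# argument wraps whatever chat API backs the attacker and defender models.
```

The two per-side message histories matter because each model must see the other's output as incoming "user" turns; collapsing them into one shared history is a common source of role instability in such simulations.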