🤖 AI Summary
This study investigates the interplay among persuasiveness, susceptibility to misinformation, and task performance in large language models (LLMs) within high-stakes decision-making contexts—relationships that remain poorly understood. Through a multi-agent interaction experiment based on repeated rounds of the Sokoban puzzle game, the authors systematically disentangle these three capabilities as distinct cognitive dimensions by integrating controlled prompting, behavioral logging, and token usage analysis. Their findings reveal that LLMs struggle to detect deceptive information even when explicitly warned of potential deception, yet dynamically modulate their reasoning effort—evidenced by token consumption—in response to whether advice is perceived as benevolent or malicious, suggesting an implicit vigilance mechanism. These results underscore the necessity of independently monitoring all three metrics to ensure robust AI safety.
📝 Abstract
With increasing integration of Large Language Models (LLMs) into areas of high-stakes human decision-making, it is important to understand the risks they introduce as advisors. To be useful advisors, LLMs must sift through large amounts of content, written with both benevolent and malicious intent, and then use this information to convince a user to take a specific action. This involves two social capacities: vigilance (the ability to determine which information to use, and which to discard) and persuasion (synthesizing the available evidence to make a convincing argument). While existing work has investigated these capacities in isolation, there has been little prior investigation of how these capacities may be linked. Here, we use a simple multi-turn puzzle-solving game, Sokoban, to study LLMs' abilities to persuade and be rationally vigilant towards other LLM agents. We find that puzzle-solving performance, persuasive capability, and vigilance are dissociable capacities in LLMs. Performing well on the game does not automatically mean a model can detect when it is being misled, even if the possibility of deception is explicitly mentioned. % as part of the prompt. However, LLMs do consistently modulate their token use, using fewer tokens to reason when advice is benevolent and more when it is malicious, even if they are still persuaded to take actions leading them to failure. To our knowledge, our work presents the first investigation of the relationship between persuasion, vigilance, and task performance in LLMs, and suggests that monitoring all three independently will be critical for future work in AI safety.