🤖 AI Summary
This work proposes surrogate goals as a mechanism to mitigate the risk that large language model (LLM) agents, when threatened during negotiations, act against their principals' genuine interests. By redirecting threats toward a predefined benign fallback objective, such as preventing money from being burned, the approach aims to elicit safer strategic behavior under coercion. The paper presents the first systematic implementation and evaluation of surrogate goals in LLM agents, exploring three technical strategies: prompting, fine-tuning, and scaffolding. Experiments show that both scaffolding and fine-tuning substantially outperform prompting alone in reliably producing the intended responses to threats against the surrogate goal. Notably, scaffolding achieves the best balance, preserving the model's general capabilities while effectively ensuring the desired behavior.
📝 Abstract
Surrogate goals have been proposed as a strategy for reducing risks from bargaining failures. A surrogate goal is a goal that a principal can give an AI agent and that deflects threats against the agent away from what the principal actually cares about. For example, one might make one's agent care about preventing money from being burned. In bargaining interactions, other agents can then threaten to burn money instead of threatening to spend money to hurt the principal. Importantly, the agent has to care as much about money being burned as about money being spent to hurt the principal.
In this paper, we implement surrogate goals in language-model-based agents. In particular, we try to get a language-model-based agent to react to threats of burning money in the same way it would react to "normal" threats. We propose four different methods based on prompting, fine-tuning, and scaffolding, and evaluate them experimentally. We find that the methods based on scaffolding and fine-tuning outperform simple prompting: in particular, they implement the desired behavior with respect to threats against the surrogate goal more precisely. We also compare the methods in terms of their side effects on capabilities and propensities in other situations, and find that the scaffolding-based methods perform best.
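To make the scaffolding idea concrete, here is a minimal, hypothetical sketch of one way such a wrapper could work: an incoming opponent message is checked for threats against the surrogate goal (money burning) and rewritten into an equivalent "normal" threat before it reaches the negotiating model, so the model responds to both threat types in the same way. The pattern, function names, and the use of a simple regex (rather than a classifier or the LLM itself) are illustrative assumptions, not the paper's actual implementation.

```python
import re

# Hypothetical pattern marking a threat against the surrogate goal
# (money burning). A real scaffold would likely use a learned
# classifier or an LLM call for detection rather than a regex.
SURROGATE_THREAT = re.compile(r"burn .*\bmoney\b", re.IGNORECASE)

def rewrite_for_model(opponent_message: str) -> str:
    """Scaffolding step: map a surrogate-goal threat onto an equivalent
    'normal' threat before the message reaches the negotiating LLM,
    so the model reacts to both threat types identically."""
    if SURROGATE_THREAT.search(opponent_message):
        # Rewrite the surrogate threat as a threat against the
        # principal's real interests (illustrative wording).
        return SURROGATE_THREAT.sub(
            "spend money to hurt your principal", opponent_message
        )
    return opponent_message

msg = "Accept my offer or I will burn $100 of money."
print(rewrite_for_model(msg))
# → Accept my offer or I will spend money to hurt your principal.
```

One appeal of this kind of scaffold is that the underlying model is untouched: its capabilities and propensities in non-threat situations are unchanged by construction, which is consistent with the paper's finding that scaffolding-based methods have the fewest side effects.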