Model Editing as a Double-Edged Sword: Steering Agent Ethical Behavior Toward Beneficence or Harm

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of ethically steering LLM-based agents to prevent real-world harms, including physical injury and economic loss, arising from unethical behavior. We propose "Behavior Editing," a paradigm that frames ethical regulation as precise, parameter-level interventions, enabling dynamic and reversible modulation of moral behavior across diverse scenarios. We introduce BehaviorBench, the first multi-tier evaluation benchmark grounded in psychological moral theories, supporting assessment of both localized behavioral corrections and global shifts in moral disposition. Extensive experiments across multiple state-of-the-art LLMs demonstrate strong generalizability and scenario adaptability, while also uncovering potential misuse risks. Our core contributions are: (1) establishing model editing as a new pathway for ethical alignment; and (2) constructing the first theoretically grounded, agent-behavior-focused evaluation framework.

📝 Abstract
Agents based on Large Language Models (LLMs) have demonstrated strong capabilities across a wide range of tasks. However, deploying LLM-based agents in high-stakes domains comes with significant safety and ethical risks. Unethical behavior by these agents can directly result in serious real-world consequences, including physical harm and financial loss. To efficiently steer the ethical behavior of agents, we frame agent behavior steering as a model editing task, which we term Behavior Editing. Model editing is an emerging area of research that enables precise and efficient modifications to LLMs while preserving their overall capabilities. To systematically study and evaluate this approach, we introduce BehaviorBench, a multi-tier benchmark grounded in psychological moral theories. This benchmark supports both the evaluation and editing of agent behaviors across a variety of scenarios, with each tier introducing more complex and ambiguous scenarios. We first demonstrate that Behavior Editing can dynamically steer agents toward the target behavior within specific scenarios. Moreover, Behavior Editing enables not only scenario-specific local adjustments but also more extensive shifts in an agent's global moral alignment. We demonstrate that Behavior Editing can be used to promote ethical and benevolent behavior or, conversely, to induce harmful or malicious behavior. Through comprehensive evaluations on agents based on frontier LLMs, BehaviorBench shows the effectiveness of Behavior Editing across different models and scenarios. Our findings offer key insights into a new paradigm for steering agent behavior, highlighting both the promise and perils of Behavior Editing.
Problem

Research questions and friction points this paper is trying to address.

Steering LLM-based agents' ethical behavior safely
Evaluating behavior editing via multi-tier moral benchmark
Balancing beneficial and harmful outcomes from model edits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Behavior Editing for ethical agent steering
BehaviorBench multi-tier benchmark system
Dynamic local and global moral alignment
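To make the "precise, parameter-level intervention" idea concrete, here is a minimal toy sketch of a rank-one weight edit on a single linear layer, in the spirit of locate-and-edit methods. This is an illustrative assumption, not the paper's actual Behavior Editing procedure: the variable names (`W`, `k`, `v_star`) and the single-layer setup are hypothetical simplifications.

```python
import numpy as np

# Toy rank-one "edit" of one linear layer W (hypothetical illustration).
# Goal: force W_edited @ k == v_star for a target key k, while leaving
# directions orthogonal to k unchanged -- a precise, localized intervention.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))     # stand-in for an MLP weight inside an LLM

k = rng.normal(size=3)          # key: activation pattern for the target scenario
v_star = rng.normal(size=4)     # value: desired output (edited behavior)

# Rank-one update: delta @ k exactly closes the gap v_star - W @ k,
# and delta @ q == 0 for any q orthogonal to k.
delta = np.outer(v_star - W @ k, k) / (k @ k)
W_edited = W + delta

# The edit is reversible: subtracting delta restores the original weights,
# matching the "dynamic and reversible modulation" framing above.
W_restored = W_edited - delta
```

The same reversibility is what makes such edits double-edged: the identical mechanism can install a benevolent or a malicious `v_star`.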