AgentOrca: A Dual-System Framework to Evaluate Language Agents on Operational Routine and Constraint Adherence

๐Ÿ“… 2025-03-11
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
While contemporary language agents excel at task execution, their adherence to operational constraints and safety protocols remains inadequately and systematically evaluated. Method: We propose AgentOrcaโ€”the first dual-system evaluation framework explicitly designed for operational compliance, covering five critical domains (e.g., finance, healthcare). It innovatively integrates natural-language prompting with executable-code verification, enabling programmatic constraint modeling, automated test-case generation, and multi-dimensional quantitative assessment. Results: Empirical evaluation reveals pervasive compliance failures across mainstream models: compliance rates drop by over 40% under complex constraints or adversarial user prompting. Although large-reasoning models (e.g., o1) achieve the highest scores, they still fall significantly short of acceptable compliance thresholds. AgentOrca establishes a reproducible benchmark and actionable improvement pathway for developing trustworthy, deployable language agents.

Technology Category

Application Category

๐Ÿ“ Abstract
As language agents progressively automate critical tasks across domains, their ability to operate within operational constraints and safety protocols becomes essential. While extensive research has demonstrated these agents' effectiveness in downstream task completion, their reliability in following operational procedures and constraints remains largely unexplored. To this end, we present AgentOrca, a dual-system framework for evaluating language agents' compliance with operational constraints and routines. Our framework encodes action constraints and routines through both natural language prompts for agents and corresponding executable code serving as ground truth for automated verification. Through an automated pipeline of test case generation and evaluation across five real-world domains, we quantitatively assess current language agents' adherence to operational constraints. Our findings reveal notable performance gaps among state-of-the-art models, with large reasoning models like o1 demonstrating superior compliance while others show significantly lower performance, particularly when encountering complex constraints or user persuasion attempts.
Problem

Research questions and friction points this paper is trying to address.

Evaluating language agents' operational constraint adherence
Assessing reliability in following operational procedures
Identifying performance gaps in complex constraint scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-system framework for agent evaluation
Encodes constraints via natural language and code
Automated test case generation and evaluation
๐Ÿ”Ž Similar Papers
No similar papers found.