🤖 AI Summary
This study targets a key deployment barrier for proactive AI agents: insufficient user trust in systems that act on one's behalf. The proposed tool, DoubleAgents, builds trust into the design itself, combining user intervention points, value-reflecting policies, rich state visualizations, and uncertainty flagging for human coordination tasks. A built-in respondent simulation generates realistic scenarios so users can rehearse delegations, refine their policies, and calibrate their reliance before live use. Across a two-day lab study (n=10), two real-world deployments (n=2), and a technical evaluation, participants who initially hesitated to delegate grew more reliant as they experienced the system's transparency, control, and adaptive learning on simulated cases, and the deployments showed that the effort required scaled appropriately with task complexity and contextual data. The core contribution is a set of trust-by-design patterns for proactive AI (consistency, controllability, and explainability) together with simulation as a safe path for building and calibrating trust over time.
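To make the policy-plus-uncertainty-flagging pattern concrete, here is a minimal Python sketch. Everything in it (`DelegationPolicy`, its fields, `decide`) is a hypothetical illustration of the pattern under stated assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a value-reflecting delegation policy with
# uncertainty flagging; names are illustrative, not DoubleAgents' API.
from dataclasses import dataclass, field


@dataclass
class DelegationPolicy:
    """User-set rules the agent consults before acting autonomously."""
    auto_act_threshold: float = 0.9  # confidence needed to act without review
    protected_topics: set[str] = field(default_factory=lambda: {"budget", "legal"})

    def decide(self, topic: str, confidence: float) -> str:
        # Controllability: protected topics always go back to the user.
        if topic in self.protected_topics:
            return "escalate_to_user"
        # Transparency: low confidence is flagged rather than hidden.
        if confidence < self.auto_act_threshold:
            return "flag_uncertain"
        return "act_autonomously"


policy = DelegationPolicy()
print(policy.decide(topic="scheduling", confidence=0.95))  # act_autonomously
print(policy.decide(topic="budget", confidence=0.99))      # escalate_to_user
```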
📝 Abstract
Agentic workflows promise efficiency, but adoption hinges on whether people actually trust systems that act on their behalf. We present DoubleAgents, an agentic planning tool that embeds transparency and control through user intervention, value-reflecting policies, rich state visualizations, and uncertainty flagging for human coordination tasks. A built-in respondent simulation generates realistic scenarios, allowing users to rehearse, refine policies, and calibrate their reliance before live use. We evaluate DoubleAgents in a two-day lab study (n=10), two deployments (n=2), and a technical evaluation. Results show that participants initially hesitated to delegate but grew more reliant as they experienced transparency, control, and adaptive learning during simulated cases. Deployment results demonstrate DoubleAgents' real-world relevance and usefulness, showing that the effort required scaled appropriately with task complexity and contextual data. We contribute trust-by-design patterns and mechanisms for proactive AI -- consistency, controllability, and explainability -- along with simulation as a safe path to build and calibrate trust over time.
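The rehearsal idea can be sketched in the same spirit. The toy example below assumes a stand-in respondent simulator; `simulate_response`, `decide`, and `REVIEW_THRESHOLD` are all hypothetical names. It only illustrates how previewing agent decisions on simulated cases lets a user tune a review threshold, and thus their reliance, before any live delegation.

```python
# Toy rehearsal loop with a stand-in respondent simulator; names and
# logic are illustrative assumptions, not the paper's implementation.
import random

REVIEW_THRESHOLD = 0.9  # below this, the agent defers to the user


def simulate_response(scenario: str) -> tuple[str, float]:
    """Generate a synthetic respondent reply plus the agent's confidence."""
    return f"Simulated reply to: {scenario}", random.uniform(0.5, 1.0)


def decide(confidence: float) -> str:
    """Act only when confident; otherwise flag the case for user review."""
    return "act_autonomously" if confidence >= REVIEW_THRESHOLD else "flag_for_review"


# Rehearsal: preview decisions on simulated cases before live use, so the
# user can tighten or relax the threshold until reliance feels calibrated.
for scenario in ["reschedule Tuesday meeting", "confirm workshop venue"]:
    reply, confidence = simulate_response(scenario)
    print(f"{scenario!r}: confidence={confidence:.2f} -> {decide(confidence)}")
```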