Learning to Coordinate with Experts

📅 2025-02-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the core challenge of enabling AI agents to autonomously decide “when to act versus when to seek expert assistance” in dynamic environments. We propose YRC (Learning to Yield and Request Control), a novel paradigm wherein agents are trained without expert interaction but must adapt online during deployment to environmental dynamics and asynchronous expert interventions. Methodologically, we introduce YRC-Bench—the first open-source, multi-domain benchmark for this problem—and design an adaptive control policy grounded in reinforcement learning and online validation, supporting both simulated expert interfaces and Gym-style APIs. Experiments demonstrate that YRC systematically exposes critical trade-offs among generalization capability, expert response latency, and policy robustness. Across diverse environments, it consistently outperforms baselines, establishing a reproducible evaluation standard and empirical foundation for safe, scalable human-AI collaboration.

Technology Category

Application Category

📝 Abstract
When deployed in dynamic environments, AI agents will inevitably encounter challenges that exceed their individual capabilities. Leveraging assistance from expert agents-whether human or AI-can significantly enhance safety and performance in such situations. However, querying experts is often costly, necessitating the development of agents that can efficiently request and utilize expert guidance. In this paper, we introduce a fundamental coordination problem called Learning to Yield and Request Control (YRC), where the objective is to learn a strategy that determines when to act autonomously and when to seek expert assistance. We consider a challenging practical setting in which an agent does not interact with experts during training but must adapt to novel environmental changes and expert interventions at test time. To facilitate empirical research, we introduce YRC-Bench, an open-source benchmark featuring diverse domains. YRC-Bench provides a standardized Gym-like API, simulated experts, evaluation pipeline, and implementation of competitive baselines. Towards tackling the YRC problem, we propose a novel validation approach and investigate the performance of various learning methods across diverse environments, yielding insights that can guide future research.
Problem

Research questions and friction points this paper is trying to address.

AI agents face challenges exceeding capabilities
Costly to query experts for assistance
Learn when to act or seek help
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning to Yield Control
YRC-Bench benchmark
Novel validation approach
🔎 Similar Papers
No similar papers found.