🤖 AI Summary
Current AI agents generally lack the judgment to proactively seek human assistance when confronted with tasks involving missing, ambiguous, or conflicting information, which leads to significant performance degradation. This work introduces HiL-Bench, a benchmark of human-validated blocking tasks designed to systematically evaluate agents' selective help-seeking, along with a novel Ask-F1 metric that balances question precision against recall of blocking items, thereby suppressing uninformative queries. Building on this metric, we further employ a reinforcement learning–based training strategy. Experimental results demonstrate that state-of-the-art models perform substantially worse in autonomous help-seeking scenarios than in full-information settings, whereas a 32B model fine-tuned via reinforcement learning markedly improves both help-seeking quality and task success rate, while also exhibiting strong cross-domain generalization.
📝 Abstract
Frontier coding agents solve complex tasks when given complete context but collapse when specifications are incomplete or ambiguous. The bottleneck is not raw capability but judgment: knowing when to act autonomously and when to ask for help. Current benchmarks are blind to this failure mode: they supply detailed, unambiguous instructions and reward only execution correctness, so an agent that makes a lucky guess at a missing requirement scores identically to one that would have asked to be certain. We present HiL-Bench (Human-in-the-Loop Benchmark) to measure this selective-escalation skill. Each task contains human-validated blockers (missing information, ambiguous requests, contradictory information) that surface only through progressive exploration, not upfront inspection. Our core metric, Ask-F1, the harmonic mean of question precision and blocker recall, captures the tension between over-asking and silent guessing; by construction, it prevents gaming through question spam. Evaluation across SWE and text-to-SQL domains reveals a large, universal judgment gap: no frontier model recovers more than a fraction of its full-information performance when it must decide whether to ask. Failure analysis identifies three recurrent help-seeking patterns: overconfident wrong beliefs with no gap detection; high uncertainty detection yet persistent errors; and broad, imprecise escalation without self-correction. The consistency of these patterns confirms that poor help-seeking is a model-level flaw, not a task-specific one. RL training on a shaped Ask-F1 reward shows that judgment is trainable: a 32B model improves both help-seeking quality and task pass rate, with gains that transfer across domains. The model does not learn domain-specific heuristics for when to ask; it learns to detect unresolvable uncertainty and act on it.
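As a rough illustration of the Ask-F1 metric described above, the following sketch assumes question precision is the fraction of agent questions that map to a ground-truth blocker and blocker recall is the fraction of blockers covered by at least one question; the paper's exact matching procedure between questions and blockers is not reproduced here, and the function name and counting convention are assumptions for illustration.

```python
def ask_f1(matched_questions: int, total_questions: int,
           covered_blockers: int, total_blockers: int) -> float:
    """Harmonic mean of question precision and blocker recall.

    matched_questions: agent questions that correspond to a real blocker
    covered_blockers:  ground-truth blockers hit by at least one question
    """
    if total_questions == 0 or total_blockers == 0:
        return 0.0
    precision = matched_questions / total_questions
    recall = covered_blockers / total_blockers
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Question spam is self-defeating: 50 questions that cover all 3 blockers
# achieve perfect recall but near-zero precision, so the harmonic mean stays low.
print(ask_f1(3, 50, 3, 3))  # ~0.113
print(ask_f1(3, 3, 3, 3))   # 1.0: three targeted questions, all blockers covered
```

The harmonic mean is what makes spam unprofitable: unlike an arithmetic mean, it is dragged toward the smaller of the two terms, so flooding the human with questions cannot buy recall without paying for it in precision.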