To Ask or Not to Ask: Learning to Require Human Feedback

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing decision-support systems (e.g., Learning to Defer) treat human experts and predictive models as mutually exclusive decision-makers, restricting experts to providing only final predictions and thus precluding fine-grained human-AI collaboration. Method: The paper proposes Learning to Ask (LtA), a framework that systematically models two core questions: *when* to query an expert and *how* to incorporate their feedback. LtA departs from the exclusive-decision paradigm via a two-part architecture (a standard ML model plus an enriched model trained with additional expert feedback), supports both sequential and joint training, and introduces surrogate losses with realisable-consistency guarantees. Contribution/Results: Experiments on synthetic and real-world expert-annotated datasets show that LtA improves collaborative accuracy over deferral-based baselines, offering a more flexible and powerful foundation for human-AI co-decision making in classification tasks.

📝 Abstract
Developing decision-support systems that complement human performance in classification tasks remains an open challenge. A popular approach, Learning to Defer (LtD), allows a Machine Learning (ML) model to pass difficult cases to a human expert. However, LtD treats humans and ML models as mutually exclusive decision-makers, restricting the expert contribution to mere predictions. To address this limitation, we propose Learning to Ask (LtA), a new framework that handles both when and how to incorporate expert input in an ML model. LtA is based on a two-part architecture: a standard ML model and an enriched model trained with additional expert human feedback, with a formally optimal strategy for selecting when to query the enriched model. We provide two practical implementations of LtA: a sequential approach, which trains the models in stages, and a joint approach, which optimises them simultaneously. For the latter, we design surrogate losses with realisable-consistency guarantees. Our experiments with synthetic and real expert data demonstrate that LtA provides a more flexible and powerful foundation for effective human-AI collaboration.
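The two-part architecture from the abstract can be sketched in a few lines: a standard model predicts from features alone, an enriched model additionally consumes expert feedback, and a rejector decides when querying the expert is worthwhile. Everything below (the toy models, the uncertainty-based rejector, the scalar feedback) is an illustrative assumption, not the paper's actual implementation.

```python
import numpy as np

def standard_model(x):
    # Hypothetical standard classifier: predicts from features alone.
    return int(x.sum() > 0)

def enriched_model(x, expert_feedback):
    # Hypothetical enriched classifier: combines the features with
    # expert-provided feedback (here, a single scalar hint).
    return int(x.sum() + expert_feedback > 0)

def should_ask(x, threshold=0.5):
    # Hypothetical rejector: query the expert only when the standard
    # model is uncertain (here, when the decision margin is small).
    return abs(x.sum()) < threshold

def lta_predict(x, query_expert):
    # LtA-style inference: ask the expert only when it is worthwhile,
    # then route the input to the enriched model; otherwise fall back
    # to the standard model and spare the expert's time.
    if should_ask(x):
        return enriched_model(x, query_expert(x))
    return standard_model(x)
```

The key contrast with Learning to Defer is visible in `lta_predict`: the expert's input flows into the model's prediction rather than replacing it.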
Problem

Research questions and friction points this paper is trying to address.

Developing systems that complement human performance in classification tasks
Addressing limitations of Learning to Defer by incorporating expert feedback
Handling when and how to integrate human input into ML models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Learning to Ask framework for human-AI collaboration
Uses two-part architecture with enriched model and feedback
Implements sequential and joint training approaches with guarantees
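The sequential variant above can be sketched as a staged recipe: first train the standard model, then the enriched model on features augmented with expert feedback, then choose when to query. The scikit-learn stand-ins, the toy data, and the confidence-threshold rule are all assumptions for illustration; the paper's actual losses and query strategy differ.

```python
# A minimal sketch of a sequential LtA-style pipeline (assumed, not the
# paper's implementation), using logistic regression for both models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # input features
f = rng.normal(size=(200, 1))            # expert feedback (extra feature)
y = (X[:, 0] + f[:, 0] > 0).astype(int)  # labels partly depend on feedback

# Stage 1: train the standard model on features alone.
standard = LogisticRegression().fit(X, y)

# Stage 2: train the enriched model on features plus expert feedback.
enriched = LogisticRegression().fit(np.hstack([X, f]), y)

# Stage 3: decide when to ask -- here, query the expert whenever the
# standard model's confidence falls below a threshold tau.
def predict(x, feedback_fn, tau=0.75):
    p = standard.predict_proba(x.reshape(1, -1)).max()
    if p < tau:  # uncertain: pay the cost of querying the expert
        xf = np.hstack([x, feedback_fn(x)])
        return enriched.predict(xf.reshape(1, -1))[0]
    return standard.predict(x.reshape(1, -1))[0]
```

Because the labels depend on the feedback feature, the enriched model fits the training data at least as well as the standard one, which is what makes selectively routing uncertain inputs to it pay off.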