Multi-Round Human-AI Collaboration with User-Specified Requirements

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a central challenge in high-stakes, multi-turn human-AI collaboration: balancing the avoidance of counterfactual harm (not undermining human strengths) with achieving complementarity (compensating for error-prone human behavior). The paper proposes the first user-centered collaborative framework that lets users formalize these dual objectives as executable constraints through customizable rules. It introduces an online algorithm that satisfies the constraints dynamically during interaction, without explicitly modeling human behavior or making assumptions about its distribution. Theoretical analysis provides finite-sample guarantees, and experiments on medical diagnosis and visual reasoning tasks show that the method reliably controls constraint violation rates while predictably improving human decision accuracy as constraint strength is adjusted.

📝 Abstract
As humans increasingly rely on multi-round conversational AI for high-stakes decisions, principled frameworks are needed to ensure such interactions reliably improve decision quality. We adopt a human-centric view governed by two principles: counterfactual harm, ensuring the AI does not undermine human strengths, and complementarity, ensuring it adds value where the human is prone to err. We formalize these concepts via user-defined rules, allowing users to specify exactly what harm and complementarity mean for their specific task. We then introduce an online, distribution-free algorithm with finite-sample guarantees that enforces the user-specified constraints over the collaboration dynamics. We evaluate our framework across two interactive settings: LLM-simulated collaboration on a medical diagnostic task and a human crowdsourcing study on a pictorial reasoning task. We show that our online procedure maintains prescribed counterfactual harm and complementarity violation rates even under non-stationary interaction dynamics. Moreover, tightening or loosening these constraints produces predictable shifts in downstream human accuracy, confirming that the two principles serve as practical levers for steering multi-round collaboration toward better decision quality without the need to model or constrain human behavior.
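The abstract does not spell out the algorithm itself, but the key idea it describes (keeping an empirical violation rate near a user-prescribed level online, with no distributional assumptions) can be illustrated with a minimal adaptive-threshold sketch. This is not the paper's method; the function name, the uniform "risk" signal, and all parameter values are hypothetical stand-ins chosen only to show how a gradient-style update can drive the violation frequency toward a target `alpha`.

```python
import random

def run_online_controller(alpha=0.1, eta=0.05, rounds=5000, seed=0):
    """Illustrative sketch (not the paper's algorithm): adjust a decision
    threshold online so that the long-run rate of constraint violations
    tracks a prescribed target alpha, without modeling the data source."""
    rng = random.Random(seed)
    threshold = 0.5   # hypothetical strictness of the user's rule
    violations = 0
    for _ in range(rounds):
        risk = rng.random()            # stand-in for an observed harm signal
        violated = risk > threshold    # did this round violate the rule?
        violations += violated
        # Loosen after a violation, tighten slightly otherwise; in
        # equilibrium the violation frequency is driven toward alpha.
        threshold += eta * (int(violated) - alpha)
        threshold = min(max(threshold, 0.0), 1.0)
    return violations / rounds
```

Under these toy assumptions, the returned empirical violation rate settles close to `alpha` regardless of where the threshold starts, which is the flavor of the "adjustable constraint strength" lever the abstract describes: tightening `alpha` predictably changes downstream behavior.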
Problem

Research questions and friction points this paper is trying to address.

counterfactual harm
complementarity
human-AI collaboration
multi-round interaction
decision quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

counterfactual harm
complementarity
user-specified constraints
online algorithm
human-AI collaboration