Controllable Complementarity: Subjective Preferences in Human-AI Collaboration

📅 2025-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Human-AI collaboration research has long overlooked subjective human preferences, such as sense of control and perceived enjoyment, despite their critical role in effective teamwork. Method: This work proposes explicit user controllability over AI behavior as a core mechanism for improving both complementarity and user experience. It adopts a Behavior Shaping reinforcement learning framework that enables real-time user intervention and policy guidance during shared tasks. Contribution/Results: The work systematically integrates subjective preferences into the evaluation of human-AI complementarity, moving beyond conventional objective-performance-only metrics. Empirical results show that controllable AI significantly improves users' subjective ratings of AI effectiveness and enjoyment, and that the resulting AI policies remain robust even when controls are hidden. These findings indicate that aligning AI behavior with human control preferences yields gains beyond those captured by objective metrics alone.

📝 Abstract
Research on human-AI collaboration often prioritizes objective performance. However, understanding human subjective preferences is essential to improving human-AI complementarity and human experiences. We investigate human preferences for controllability in a shared workspace task with AI partners using Behavior Shaping (BS), a reinforcement learning algorithm that allows humans explicit control over AI behavior. In one experiment, we validate the robustness of BS in producing effective AI policies relative to self-play policies, when controls are hidden. In another experiment, we enable human control, showing that participants perceive AI partners as more effective and enjoyable when they can directly dictate AI behavior. Our findings highlight the need to design AI that prioritizes both task performance and subjective human preferences. By aligning AI behavior with human preferences, we demonstrate how human-AI complementarity can extend beyond objective outcomes to include subjective preferences.
Problem

Research questions and friction points this paper is trying to address.

Understanding human subjective preferences in human-AI collaboration
Investigating controllability in shared workspace tasks with AI partners
Designing AI that balances task performance and human preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Behavior Shaping for human-AI collaboration
Reinforcement learning with human control
Aligning AI behavior with subjective preferences
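The controllability idea behind Behavior Shaping can be illustrated as conditioning the AI partner's action choice on an explicit human control signal. The paper does not specify the BS algorithm's internals here, so the following is a minimal hypothetical Python sketch: the function name `shaped_action`, the per-behavior value table, and the greedy fallback are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of behavior-shaped action selection.
# The real Behavior Shaping (BS) algorithm trains RL policies that
# accept human control; here we only illustrate the control interface:
# when the human sets a control signal, the AI defers to it, and when
# controls are hidden (control=None), the AI falls back to acting on
# its own learned value estimates.

def shaped_action(q_values, control=None):
    """Select a behavior for the AI partner.

    q_values: dict mapping behavior name -> estimated value (assumed).
    control:  optional behavior name the human explicitly requests;
              None models the hidden-control condition.
    """
    if control is not None and control in q_values:
        return control  # human dictates AI behavior directly
    # Hidden-control condition: act greedily on learned values,
    # analogous to the self-play-style baseline policy.
    return max(q_values, key=q_values.get)


# Usage: the same policy serves both experimental conditions.
q = {"fetch_left": 0.4, "fetch_right": 0.9}
print(shaped_action(q))                        # hidden control -> "fetch_right"
print(shaped_action(q, control="fetch_left"))  # user-directed -> "fetch_left"
```

The point of the sketch is that a single policy can cover both experiments in the abstract: with `control=None` it behaves autonomously (robustness under hidden controls), and with a control signal it aligns with the user's stated preference.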