Controlling AI Agent Participation in Group Conversations: A Human-Centered Approach

📅 2025-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of calibrating AI agent participation in multi-person brainstorming: how to enhance collaborative efficiency without disrupting human interaction or compromising user agency. Through two rounds of human factors experiments, qualitative interviews, and iterative prototyping, the authors propose the first taxonomy for controlling AI agent behavior in group dialogue, spanning five dimensions: response timing, content generation, spatial positioning, locus of control, and implementation modality. Results reveal a strong user preference for the AI to act in a supportive rather than directive role; real-time, co-located, and collaborative control interfaces significantly improved perceived acceptability and usability. The work advances a human-centered design paradigm for mixed-initiative conversational agents, delivering both an empirically grounded, actionable taxonomy and design principles validated through user studies.

📝 Abstract
Conversational AI agents are commonly applied within single-user, turn-taking scenarios. The interaction mechanics of these scenarios are trivial: when the user enters a message, the AI agent produces a response. However, the interaction dynamics are more complex within group settings. How should an agent behave in these settings? We report on two experiments aimed at uncovering users' experiences of an AI agent's participation within a group, in the context of group ideation (brainstorming). In the first study, participants benefited from and preferred having the AI agent in the group, but participants disliked when the agent seemed to dominate the conversation and they desired various controls over its interactive behaviors. In the second study, we created functional controls over the agent's behavior, operable by group members, to validate their utility and probe for additional requirements. Integrating our findings across both studies, we developed a taxonomy of controls for when, what, and where a conversational AI agent in a group should respond, who can control its behavior, and how those controls are specified and implemented. Our taxonomy is intended to aid AI creators to think through important considerations in the design of mixed-initiative conversational agents.
Problem

Research questions and friction points this paper is trying to address.

AI Assistant Design
Team Communication
User Acceptance
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI Behavior Guidelines
Group Chat Management
User Acceptance in Team Brainstorming
Stephanie Houde
IBM Research, USA
Kristina Brimijoin
IBM Research, USA
Michael J. Muller
IBM Research, USA
Steven I. Ross
IBM Research, USA
Dario Andres Silva Moran
IBM Research, Argentina
Gabriel Enrique Gonzalez
IBM Research, Argentina
Siya Kunde
IBM Research, USA
Morgan A. Foreman
IBM Research, USA
Justin D. Weisz
Manager, Senior Research Scientist, and Strategy Lead, IBM Research
Human-Centered AI · HCI · CSCW · Social Psychology