🤖 AI Summary
This study addresses how social dynamics in meetings often impede inclusive feedback and undermine organizational effectiveness. To counter this, the paper proposes the first team-oriented, AI-driven sociotechnical feedback system: an AI agent acts as a feedback mediator and applies the Induced Hypocrisy Procedure, prompting participants to reflect on discrepancies between their behaviors and stated values, thereby improving meeting inclusivity and quality. A laboratory experiment (n=28) demonstrates that the system significantly improves equity in speaking time and perceived meeting quality. However, a field study (n=10) reveals that organizational context reshapes usage patterns—participants primarily used the system for individual reflection rather than collective feedback exchange—highlighting the critical role of organizational factors in the real-world deployment and adaptation of AI-supported collaboration tools.
📝 Abstract
Inclusion is important for meeting effectiveness, which is in turn central to organizational functioning. One way of improving inclusion in meetings is through feedback, but social dynamics make giving feedback difficult. We propose that AI agents can facilitate feedback exchange by being psychologically safer recipients, and we test this through a meeting system with an AI agent feedback mediator. When delivering feedback, the agent uses the Induced Hypocrisy Procedure, a social psychological technique that prompts behavior change by highlighting value-behavior inconsistencies. In a within-subjects lab study (n=28), the agent made speaking times more balanced and improved meeting quality. However, a field study at a small consulting firm (n=10) revealed organizational barriers that led to its use for personal reflection rather than feedback exchange. We contribute a novel sociotechnical system for feedback exchange in groups, and empirical findings demonstrating the importance of considering organizational barriers in designing AI tools for organizations.