Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study examines how product managers enact responsible innovation in generative AI contexts amid ambiguous ethical guidelines and insufficient organizational support. It identifies four key impediments: responsibility ambiguity, fragmented accountability, misaligned incentives, and a leadership vacuum. To address these, the paper proposes an “ethical recoupling” framework, showing how product managers proactively bridge the theory–practice gap through “micro-moments of responsibility”, such as autonomous data audits and team-level safety validations. Drawing on institutional theory, the research employs a mixed-methods design: 25 in-depth interviews and a global survey of 300 practitioners. It empirically demonstrates that mid-level managers can initiate and sustain small-scale ethical practices without top-down mandates. The findings offer a scalable, organizationally embedded pathway for AI governance and a novel theoretical lens on responsible innovation in AI development.

📝 Abstract
Since 2022, generative AI (genAI) has rapidly become integrated into workplaces. Though organizations have made commitments to use this technology "responsibly", how organizations and their employees prioritize responsibility in their decision-making remains absent from extant management theorizing. In this paper, we examine how product managers - who often serve as gatekeepers in decision-making processes - implement responsible practices in their day-to-day work when using genAI. Using Institutional Theory, we illuminate the factors that constrain or support proactive responsible development and usage of genAI technologies. We employ a mixed methods research design, drawing on 25 interviews with product managers and a global survey of 300 respondents in product management-related roles. The majority of our respondents report (1) widespread uncertainty regarding what "responsibility" means or looks like, (2) diffused responsibility given assumed ethical actions by other teams, (3) lack of clear incentives and guidance within organizations, and (4) the importance of leadership buy-in and principles for navigating tensions between ethical commitments and profit motives. However, our study finds that even in highly uncertain environments, absent guidance from leadership, product managers can "recouple" ethical commitments and practices by finding responsibility "micro-moments". Product managers seek out low-risk, small-scale actions they can take without explicit buy-in from higher-level managers, such as individual or team-wide checks and reviews and safeguarding standards for data. Our research highlights how genAI poses unique challenges to organizations trying to couple ethical principles and daily practices and the role that middle-level management can play in recoupling the two.
Problem

Research questions and friction points this paper is trying to address.

Responsible Practices
Generative AI
Ethical Commitments
Innovation

Contribution

Methods, ideas, or system contributions that make the work stand out.

Generative AI Responsibility
Micro-moments of Responsibility
Recoupling Ethical Commitments
Genevieve Smith
University of California, Berkeley
Natalia Luka
University of California, Berkeley
Merrick Osborne
University of California, Berkeley
B. Lattimore
Stanford University
Jessica Newman
University of California, Berkeley
Brandie Nonnecke
University of California, Berkeley