What Needs Attention? Prioritizing Drivers of Developers' Trust and Adoption of Generative AI

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative AI tools see limited adoption among developers because of miscalibrated trust and high usage friction, problems compounded by design practices that overlook cognitive diversity (individual differences in cognitive style) and thereby undermine inclusivity. This study addresses the gap through a large-scale survey of developers (N=238), integrating cognitive style into a generative AI trust model. Using partial least squares structural equation modeling (PLS-SEM) and importance-performance matrix analysis (IPMA), we show that system/output quality, functional value, and goal maintenance jointly shape developers' trust, which in turn drives their intentions to use these tools. IPMA flags influential but underperforming factors, including contextual transparency and cognitive load management, as the key bottlenecks. Based on these findings, we propose a prioritized design framework and actionable design principles. Our work advances both theory and practice for trustworthy, efficient, and inclusive human-AI collaboration.
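
To make the IPMA step above concrete, here is a minimal, self-contained sketch of how such a prioritization can be computed. The factor names and all numbers are hypothetical placeholders rather than the paper's measured values: "importance" stands for a construct's total effect on the target (e.g., trust or usage intention) from the estimated path model, and "performance" is the construct's mean score rescaled to 0-100; factors that combine high importance with below-average performance surface as design priorities.

```python
# Hypothetical IPMA sketch. All factor names and values below are illustrative
# placeholders, not results reported in the paper.
import pandas as pd

ipma = pd.DataFrame(
    {
        "factor": [
            "System/output quality",
            "Functional value",
            "Goal maintenance",
            "Contextual transparency",
            "Cognitive load management",
        ],
        # Hypothetical total effects on the target construct (importance).
        "importance": [0.42, 0.35, 0.28, 0.30, 0.26],
        # Hypothetical construct scores rescaled to 0-100 (performance).
        "performance": [68.0, 71.0, 55.0, 48.0, 44.0],
    }
)

# Design priorities: factors whose performance is below average, ranked by
# how strongly they influence the target construct.
mean_perf = ipma["performance"].mean()
priorities = ipma[ipma["performance"] < mean_perf].sort_values(
    "importance", ascending=False
)
print(priorities.to_string(index=False))
```

In the paper's analysis, this role is played by the underperforming, high-impact factors it identifies, such as contextual transparency and cognitive load management.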

📝 Abstract
Generative AI (genAI) tools are advertised as productivity aids. Yet, issues related to miscalibrated trust and usage friction continue to hinder their adoption. Additionally, AI can be exclusionary, failing to support diverse users adequately, which further exacerbates these concerns. One such aspect of diversity is cognitive diversity (variations in users' cognitive styles), which leads to divergence in interaction styles. When an individual's cognitive styles are unsupported, it creates additional barriers to technology adoption. Thus, to design tools that developers trust, we must first understand what factors affect their trust and their intentions to use these tools in practice. We developed a theoretical model of factors influencing trust and adoption intentions towards genAI through a large-scale survey with developers (N=238) at GitHub and Microsoft. Using Partial Least Squares-Structural Equation Modeling (PLS-SEM), we found that genAI's system/output quality, functional value, and goal maintenance significantly influence developers' trust, which, along with their cognitive styles, affects their intentions to use these tools at work. An Importance-Performance Matrix Analysis (IPMA) identified factors that, despite their strong influence, underperform, revealing specific genAI aspects that need design prioritization. We bolster these findings by qualitatively analyzing developers' perceived challenges and risks of genAI usage to uncover why these gaps persist in development contexts. For genAI to indeed be a true productivity aid rather than a disguised productivity sink, it must align with developers' goals, maintain contextual transparency, reduce cognitive burden, and provide equitable interaction support. We provide practical suggestions to guide future genAI tool design for effective, trustworthy, and inclusive human-genAI interactions.
Problem

Research questions and friction points this paper is trying to address.

Identifying factors affecting developers' trust in generative AI tools
Addressing cognitive diversity barriers in AI tool adoption
Prioritizing design improvements for inclusive and effective genAI interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

PLS-SEM models how system/output quality, functional value, and goal maintenance drive developers' trust in genAI (a simplified sketch follows below)
IPMA flags influential but underperforming genAI aspects for design prioritization
Qualitative analysis of perceived challenges and risks explains why these gaps persist
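
The sketch below illustrates, on synthetic data, the kind of composite-score path analysis the bullets above refer to. It is not the paper's full PLS-SEM estimation (no iterative indicator weighting, no bootstrapped significance tests): construct scores are simply means of standardized Likert items, and path coefficients come from ordinary least squares. All column names and responses are made up for illustration.

```python
# Simplified composite path-model sketch (NOT the paper's full PLS-SEM
# procedure). Construct scores are means of standardized Likert indicators;
# path coefficients come from OLS. Column names and data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 238  # survey size reported in the paper; responses here are synthetic

def likert_block(prefix, n_items=3):
    """Synthetic 1-5 Likert responses for n_items indicators of one construct."""
    return pd.DataFrame(
        rng.integers(1, 6, size=(n, n_items)),
        columns=[f"{prefix}{i + 1}" for i in range(n_items)],
    )

data = pd.concat(
    [likert_block(p) for p in ("quality_", "value_", "goal_", "trust_", "intent_")],
    axis=1,
)

def construct_score(df, prefix):
    """Unit-weighted composite: mean of the construct's standardized items."""
    block = df.filter(like=prefix)
    z = (block - block.mean()) / block.std(ddof=0)
    return z.mean(axis=1)

scores = pd.DataFrame(
    {c: construct_score(data, f"{c}_")
     for c in ("quality", "value", "goal", "trust", "intent")}
)

def ols_paths(y, X):
    """Standardized OLS path coefficients of y on the columns of X."""
    Xs = (X - X.mean()) / X.std(ddof=0)
    ys = (y - y.mean()) / y.std(ddof=0)
    A = np.column_stack([np.ones(len(Xs)), Xs.to_numpy()])
    beta, *_ = np.linalg.lstsq(A, ys.to_numpy(), rcond=None)
    return dict(zip(X.columns, beta[1:]))

# Inner (structural) model: trust <- quality + value + goal; intent <- trust.
# With random synthetic data the paths are near zero; real survey responses
# would replace `data`.
trust_paths = ols_paths(scores["trust"], scores[["quality", "value", "goal"]])
intent_paths = ols_paths(scores["intent"], scores[["trust"]])
print("paths into trust:", trust_paths)
print("paths into intent:", intent_paths)
```

A full analysis would use a dedicated PLS-SEM package and feed the resulting total effects and rescaled construct scores into an IPMA like the one sketched earlier.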