🤖 AI Summary
Problem: Empirical evidence is lacking on developers' AI support needs and on how to design such tools responsibly. Method: Grounded in cognitive appraisal theory, this study employs a mixed-methods approach—surveying 860 software developers and conducting in-depth interviews—to empirically map task contexts to AI adoption behaviors. Contribution/Results: We identify three critical boundaries for AI support: (1) augmenting core development activities (e.g., coding, testing), (2) automating repetitive tasks (e.g., documentation, operations), and (3) avoiding interpersonal- and identity-sensitive tasks. Based on these findings, we propose a task-type-driven framework for differentiated AI support strategies and responsibility prioritization, specifying context-specific design principles for trustworthiness, explainability, and controllability. This work establishes both a theoretical foundation and practical guidelines for developing responsible AI systems tailored to software engineering.
📝 Abstract
Generative AI is reshaping software work, yet we lack clear guidance on where developers most need and want support, and on how to design that support responsibly. We report a large-scale, mixed-methods study of N=860 developers that examines where, why, and how they seek or limit AI help, providing the first task-aware, empirically validated mapping from developers' perceptions of their tasks to AI adoption patterns and responsible AI priorities. Using cognitive appraisal theory, we show that task evaluations predict openness to and use of AI, revealing distinct patterns: strong current use and a desire for improvement in core work (e.g., coding, testing); high demand to reduce toil (e.g., documentation, operations); and clear limits for identity- and relationship-centric work (e.g., mentoring). Priorities for responsible AI support vary by context: reliability and security for systems-facing tasks; transparency, alignment, and steerability to maintain control; and fairness and inclusiveness for human-facing work. Our results offer concrete, contextual guidance for delivering AI support where it matters most to developers and their work.