🤖 AI Summary
This study addresses a limitation of current AI programming tools in integrated development environments (IDEs): they rely predominantly on passive prompting rather than proactively anticipating developer needs, and the optimal timing for proactive assistance remains unclear. In a five-day field study, 15 developers used a production-grade IDE equipped with a proactive AI assistant while we measured acceptance rates and cognitive load for code suggestions across different workflow phases. Interventions at workflow boundaries, such as after code commits, achieved a 52% engagement rate, whereas mid-task interruptions were dismissed 62% of the time. Moreover, well-timed proactive suggestions required substantially less cognitive processing time than reactive ones (45.4 seconds versus 101.4 seconds; p = 0.0016, r = 0.533). These results reveal systematic patterns in developers' receptivity to proactive AI and provide empirical grounding for balancing AI initiative with user control.
📝 Abstract
Current in-IDE AI coding tools typically rely on time-consuming manual prompting and context management, whereas proactive alternatives that anticipate developer needs without explicit invocation remain underexplored. When humans are receptive to such proactive AI assistance during their daily work remains an open question in human-AI interaction research. We address this gap through a field study of proactive AI assistance in professional developer workflows. We present a five-day in-the-wild study with 15 developers who interacted with the proactive feature of an AI assistant, integrated into a production-grade IDE, that offers code-quality suggestions based on in-IDE developer activity. We examined 229 AI interventions across 5,732 interaction points to understand how proactive suggestions are received across workflow stages, how developers experience them, and what impact developers perceive them to have. Our findings reveal systematic patterns in human receptivity to proactive suggestions: interventions at workflow boundaries (e.g., post-commit) achieved 52% engagement rates, while mid-task interventions (e.g., on a declined edit) were dismissed 62% of the time. Notably, well-timed proactive suggestions required significantly less interpretation time than reactive suggestions (45.4s versus 101.4s, W = 109.00, r = 0.533, p = 0.0016), indicating enhanced cognitive alignment. This study provides actionable implications for designing proactive coding assistants, including how to time interventions, how to align them with developer context, and how to balance AI agency with user control in production IDEs.