Reading Between the Lines: Scalable User Feedback via Implicit Sentiment in Developer Prompts

📅 2025-09-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Developer satisfaction assessment faces a scaling problem: user studies offer analytical depth but do not scale, while explicit feedback (e.g., log-based ratings) is sparse and has a low signal-to-noise ratio. This paper proposes, for the first time, leveraging the implicit sentiment expressed in developers' natural-language prompts as a proxy for satisfaction. Applying off-the-shelf sentiment analysis to real-world, industrial-scale interaction logs from 372 professional developers, the authors identify actionable feedback in approximately 8% of interactions with strong accuracy, a feedback capture rate more than 13× higher than conventional explicit feedback methods. By transforming ubiquitous, unstructured prompt text into quantifiable affective signals, the method bridges the gap between fine-grained behavioral insight and large-scale evaluation, establishing a novel, empirically grounded paradigm for scalable, continuous developer experience (DevEx) analytics.

📝 Abstract
Evaluating developer satisfaction with conversational AI assistants at scale is critical but challenging. User studies provide rich insights, but are unscalable, while large-scale quantitative signals from logs or in-product ratings are often too shallow or sparse to be reliable. To address this gap, we propose and evaluate a new approach: using sentiment analysis of developer prompts to identify implicit signals of user satisfaction. With an analysis of industrial usage logs of 372 professional developers, we show that this approach can identify a signal in ~8% of all interactions, a rate more than 13 times higher than explicit user feedback, with reasonable accuracy even with an off-the-shelf sentiment analysis approach. This new practical approach to complement existing feedback channels would open up new directions for building a more comprehensive understanding of the developer experience at scale.
Problem

Research questions and friction points this paper is trying to address.

Evaluating developer satisfaction with AI assistants at scale is challenging
Existing methods fall short: user studies do not scale, and quantitative log signals are too shallow or sparse to be reliable
Can sentiment analysis of developer prompts surface implicit satisfaction signals at scale?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mining developer prompts with off-the-shelf sentiment analysis
Treating implicit sentiment as a proxy for user satisfaction
Capturing a usable signal in ~8% of interactions, over 13× the rate of explicit feedback
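The paper does not specify its analyzer beyond "off-the-shelf sentiment analysis," so as a rough illustration of the pipeline, here is a minimal keyword-based sketch (all cue lists, names, and example prompts are hypothetical, not from the paper) of classifying developer prompts as implicit feedback:

```python
# Hypothetical sketch: classify prompt sentiment with simple keyword cues,
# standing in for the off-the-shelf sentiment analyzer the paper evaluates.
NEGATIVE_CUES = {"wrong", "broken", "still not", "doesn't work", "useless"}
POSITIVE_CUES = {"thanks", "great", "perfect", "exactly what"}

def implicit_sentiment(prompt: str) -> str:
    """Return 'negative', 'positive', or 'neutral' for a developer prompt."""
    text = prompt.lower()
    if any(cue in text for cue in NEGATIVE_CUES):
        return "negative"
    if any(cue in text for cue in POSITIVE_CUES):
        return "positive"
    return "neutral"

# Only prompts carrying sentiment count as implicit feedback; the rest
# (typically the large majority) are neutral task requests.
prompts = [
    "this is still not working, the fix you suggested is wrong",
    "refactor this function to use a generator",
    "perfect, thanks! now add unit tests",
]
feedback = [p for p in prompts if implicit_sentiment(p) != "neutral"]
```

In the paper's setting, the fraction of interactions landing in `feedback` is the capture rate (~8% in their logs); a production version would swap the keyword rules for a trained sentiment model.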