🤖 AI Summary
This study investigates the effectiveness of ChatGPT in assisting developers with real-world software problem-solving on GitHub.
Method: Leveraging 686 authentic developer–ChatGPT conversations shared in GitHub issue threads, we construct a multidimensional analytical framework encompassing task type, project characteristics, and dialogue quality. Using a mixed-method analysis that combines classification, quantitative measurement, and qualitative inspection, we empirically identify dialogue patterns that enhance large language model (LLM)-assisted development.
Contribution/Results: We find that 62% of dialogues meaningfully advance problem resolution; ChatGPT excels at code generation and tool recommendation but falls short at code explanation. Conciseness, readability, and semantic alignment emerge as key predictors of dialogue efficacy. This work introduces the first empirically grounded, development-context-aware helpfulness evaluation framework for real-world LLM–developer interactions, providing actionable insights for optimizing human–model collaboration and guiding improvements to model capabilities.
📝 Abstract
Conversational large language models (LLMs) are widely used for issue resolution tasks, yet not every developer–LLM conversation is useful for effective issue resolution. In this paper, we analyze 686 developer–ChatGPT conversations shared within GitHub issue threads to identify the characteristics that make these conversations effective for issue resolution. First, we analyze the conversations and their corresponding issues to distinguish helpful from unhelpful conversations, and we categorize the types of tasks developers seek help with to understand the scenarios in which ChatGPT is most effective. Next, we examine a wide range of conversational, project, and issue-related metrics to uncover factors associated with helpful conversations. Finally, we identify common deficiencies in unhelpful ChatGPT responses to highlight areas that could inform the design of more effective developer-facing tools. We found that only 62% of the ChatGPT conversations were helpful for successful issue resolution. ChatGPT is most effective for code generation and tool/library/API recommendations, but struggles with code explanations. Helpful conversations tend to be shorter, more readable, and to exhibit stronger semantic and linguistic alignment. Larger, more popular projects and more experienced developers benefit more from ChatGPT. At the issue level, ChatGPT performs best on simpler problems with limited developer activity and faster resolution times, typically well-scoped tasks such as compilation errors. The most common deficiencies in unhelpful ChatGPT responses are incorrect information and a lack of comprehensiveness. Our findings have wide-ranging implications, including guiding developers toward effective interaction strategies for issue resolution, informing the development of tools and frameworks that support optimal prompt design, and providing insights for fine-tuning LLMs on issue resolution tasks.