🤖 AI Summary
This study investigates the practical impact and integration challenges of large language models (LLMs) in real-world software development. Through in-depth interviews with 11 experienced developers, analysed via thematic coding and multi-expert consensus, it systematically examines LLM effectiveness across core tasks: code generation, debugging, and documentation. Diverging from prior work, the research characterises LLM-driven productivity gains on the basis of frontline practitioners' consensus, while also identifying critical risks: insufficient code trustworthiness, overreliance, copyright compliance concerns, and ambiguous accountability. Based on these findings, the paper proposes a "Responsible Integration" framework comprising three pillars: clearly defined human-AI collaboration boundaries, rigorous output verification protocols, and structured ethical review criteria. The framework offers actionable guidance and governance pathways for industrial LLM adoption.
📝 Abstract
The introduction of the transformer architecture was a turning point in Natural Language Processing (NLP). Transformer-based models such as Bidirectional Encoder Representations from Transformers (BERT) and the Generative Pre-trained Transformer (GPT) have gained widespread popularity in applications ranging from software development to education. The availability of Large Language Models (LLMs) such as ChatGPT and Bard to the general public has showcased the tremendous potential of these models and encouraged their integration into domains such as software development, for tasks including code generation, debugging, and documentation generation. In this study, the opinions of 11 experts regarding their experience with LLMs for software development were gathered and analysed to draw insights that can guide successful and responsible integration. The experts' overall opinion is positive: they identify advantages such as increased productivity and reduced coding time, while also highlighting potential concerns and challenges such as the risk of over-dependence and ethical considerations.