Using LLMs and Essence to Support Software Practice Adoption

📅 2025-08-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Contemporary NLP and AI research predominantly focuses on code generation, offering limited support for managerial software engineering tasks—such as practice adoption and process health monitoring. To address this gap, we propose the first deep integration of Essence—a standardized framework for software engineering practices—with large language models (LLMs), implemented via a retrieval-augmented generation (RAG)-based domain-specific chatbot. The system employs four specialized LLMs to retrieve precise contextual information from a structured Essence knowledge base and generate targeted responses. Empirical evaluation demonstrates substantial improvements over general-purpose LLMs in practice-guidance tasks, with significant gains in response quality, domain accuracy, and contextual relevance. Our core contribution is the introduction of the first Essence-driven LLM paradigm for practice support—effectively bridging software engineering theory and AI—and establishing a scalable methodological foundation for intelligent process evolution.

Technology Category

Application Category

📝 Abstract
Recent advancements in natural language processing (NLP) have enabled the development of automated tools that support various domains, including software engineering. However, while NLP and artificial intelligence (AI) research has extensively focused on tasks such as code generation, less attention has been given to automating support for the adoption of best practices, the evolution of ways of working, and the monitoring of process health. This study addresses this gap by exploring the integration of Essence, a standard and thinking framework for managing software engineering practices, with large language models (LLMs). To this end, a specialised chatbot was developed to assist students and professionals in understanding and applying Essence. The chatbot employs a retrieval-augmented generation (RAG) system to retrieve relevant contextual information from a curated knowledge base. Four different LLMs were used to create multiple chatbot configurations, each evaluated both as a base model and augmented with the RAG system. The system performance was evaluated through both the relevance of retrieved context and the quality of generated responses. Comparative analysis against the general-purpose LLMs demonstrated that the proposed system consistently outperforms its baseline counterpart in domain-specific tasks. By facilitating access to structured software engineering knowledge, this work contributes to bridging the gap between theoretical frameworks and practical application, potentially improving process management and the adoption of software development practices. While further validation through user studies is required, these findings highlight the potential of LLM-based automation to enhance learning and decision-making in software engineering.
Problem

Research questions and friction points this paper is trying to address.

Automating support for adopting software engineering best practices
Integrating Essence framework with large language models
Bridging gap between theoretical frameworks and practical application
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining Essence framework with LLMs for software practices
Using RAG system to retrieve contextual knowledge
Specialized chatbot outperforms general-purpose LLMs
🔎 Similar Papers
No similar papers found.
S
Sonia Nicoletti
Department of Computer Science and Engineering, University of Bologna, Bologna, Italy
Paolo Ciancarini
Paolo Ciancarini
Università di Bologna
software engineeringartificial intelligenceentertainment computing