🤖 AI Summary
This paper addresses the challenge of detecting and validating architectural tactics (ATs) in source code, where ATs are expressed only implicitly, making verification difficult and heightening the risk of architectural erosion. To tackle this, the authors propose Prmt4TD, a small-model-enhanced prompt engineering framework that introduces a collaborative paradigm combining fine-tuned small models with large language model (LLM) in-context learning. It injects domain-specific AT knowledge via carefully designed, domain-adapted prompts and pairs a fine-tuned lightweight classifier with LLM-based reasoning, without full-parameter fine-tuning of the LLM. The approach delivers both high-precision AT detection and natural-language explanations. Experimental results on the balanced ATs dataset show a 13%–23% improvement in F1-score, demonstrating significant gains in detection accuracy, interpretability of results, and developer trust.
📝 Abstract
Architectural tactics (ATs), as the concrete implementation of architectural decisions in code, address the non-functional requirements of software systems. Because architectural knowledge is expressed only implicitly in code, developers risk inadvertently altering or removing these tactics during code modification or optimization. Such unintended changes can trigger architectural erosion, gradually undermining the system's original design. While many researchers have proposed machine learning-based methods to improve the accuracy of detecting ATs in code, their black-box nature and the architectural domain knowledge they require pose significant challenges for developers verifying the results. Effective verification requires not only accurate detection results but also interpretable explanations that make them comprehensible; this remains a critical gap in current research. Large language models (LLMs) can generate easily interpretable AT-detection comments, provided they possess the relevant domain knowledge. However, fine-tuning LLMs to acquire that knowledge faces challenges such as catastrophic forgetting and hardware constraints. We therefore propose Prmt4TD, a small-model-augmented prompting framework that enhances both the accuracy and the comprehensibility of ATs detection. Combining fine-tuned small models with in-context learning also reduces fine-tuning costs while equipping the LLM with additional domain knowledge, and Prmt4TD leverages the strong processing and reasoning capabilities of LLMs to generate easily interpretable detection results. Our evaluation demonstrates that Prmt4TD achieves an accuracy (*F1-score*) improvement of 13%–23% on the balanced ATs dataset and enhances the comprehensibility of the detection results.
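To make the collaborative paradigm concrete, the following is a minimal, hypothetical sketch of how a fine-tuned small classifier's prediction might be injected into an in-context-learning prompt for the LLM. All function names, the stubbed classifier output, and the prompt wording are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a small-model-augmented prompt for AT detection.
# The small model supplies a candidate tactic label; the LLM receives it
# as auxiliary domain knowledge alongside few-shot examples.

def small_model_predict(code_snippet: str) -> tuple[str, float]:
    """Stand-in for the fine-tuned lightweight classifier.
    A real implementation would run an encoder fine-tuned on labeled
    AT examples; here we return a fixed placeholder prediction."""
    return "Heartbeat", 0.91  # (candidate tactic, confidence) -- illustrative

def build_prompt(code_snippet: str, few_shot: list[tuple[str, str]]) -> str:
    """Assemble a domain-adapted ICL prompt: few-shot AT examples plus
    the small model's candidate label injected as a hint."""
    label, conf = small_model_predict(code_snippet)
    parts = ["Task: decide whether the code implements an architectural "
             "tactic (AT) and explain your reasoning in natural language.\n"]
    for example_code, example_label in few_shot:   # in-context examples
        parts.append(f"Code:\n{example_code}\nTactic: {example_label}\n")
    parts.append(f"Auxiliary hint from a fine-tuned classifier: "
                 f"{label} (confidence {conf:.2f})\n")
    parts.append(f"Code:\n{code_snippet}\nTactic:")
    return "\n".join(parts)

prompt = build_prompt(
    "while alive: socket.send(b'ping'); sleep(5)",
    few_shot=[("retry(send_request, attempts=3)", "Retry")],
)
print(prompt)
```

The LLM would then be called with this prompt; because the classifier's candidate label is embedded in the context, the LLM can ground its natural-language explanation in a high-precision prediction rather than detecting the tactic from scratch.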