🤖 AI Summary
This work addresses the challenge of detecting software architecture decision violations, which often go unnoticed due to insufficient systematic documentation and a lack of automated detection mechanisms. The authors propose a multi-model large language model (LLM) collaborative pipeline: an initial LLM identifies potential violations, three independent models then validate its reasoning, and the final assessment incorporates expert judgment. The approach is systematically evaluated on open-source projects to assess the accuracy, consistency, and limitations of LLMs in identifying architectural decision violations. Experimental results show that the method performs accurately and consistently for decisions that are explicit or inferable from source code, but is far less effective for implicit decisions that rely on deployment configurations or organizational knowledge. This study provides the first empirical delineation of the capability boundaries of LLMs for this software engineering task.
📝 Abstract
Architectural Decision Records (ADRs) play a central role in maintaining software architecture quality, yet many decision violations go unnoticed because projects lack both systematic documentation and automated detection mechanisms. Recent advances in Large Language Models (LLMs) open up new possibilities for automating architectural reasoning at scale. We investigated how effectively LLMs can identify decision violations in open-source systems by examining their agreement, accuracy, and inherent limitations. Our study analyzed 980 ADRs across 109 GitHub repositories using a multi-model pipeline in which one LLM performs a primary screening for potential decision violations and three additional LLMs independently validate its reasoning. We assessed agreement, accuracy, precision, and recall, and complemented the quantitative findings with expert evaluation. The models achieved substantial agreement and strong accuracy for explicit, code-inferable decisions, but accuracy fell short for implicit decisions that depend on deployment configuration or organizational knowledge. LLMs can therefore meaningfully support validation of architectural decision compliance; however, they cannot yet replace human expertise for decisions that are not reflected in code.
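The screen-then-validate pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the screener and validator callables stand in for prompted LLM calls, and the vote-aggregation rule (unanimity accepts, disagreement escalates to an expert) is an assumed design, since the abstract does not specify how the three validators' judgments are combined.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Screening:
    violation: bool   # did the primary model flag a potential violation?
    reasoning: str    # its reasoning, shown to the validator models

def assess(adr: str, code: str,
           screen: Callable[[str, str], Screening],
           validators: List[Callable[[str, str, Screening], bool]]) -> str:
    """Return 'no_violation', 'violation', or 'needs_expert_review'."""
    s = screen(adr, code)
    if not s.violation:
        return "no_violation"
    votes = [validate(adr, code, s) for validate in validators]
    if all(votes):                       # unanimous: accept the finding
        return "violation"
    if not any(votes):                   # unanimous rejection of the reasoning
        return "no_violation"
    return "needs_expert_review"         # disagreement goes to a human expert

# Toy stand-ins for the four models (a real pipeline would call LLM APIs):
screen = lambda adr, code: Screening(
    "postgres" in adr and "mysql" in code,
    "ADR mandates Postgres; code connects to MySQL")
validators = [lambda adr, code, s: True] * 3

print(assess("ADR-7: use postgres", "conn = mysql.connect()",
             screen, validators))
# → violation
```

Routing validator disagreement to a human reviewer mirrors the paper's point that expert judgment remains necessary where model confidence breaks down.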