Evaluating Large Language Models for Detecting Architectural Decision Violations

📅 2026-02-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of detecting software architecture decision violations, which often go unnoticed due to insufficient systematic documentation and a lack of automated detection mechanisms. The authors propose a collaborative multi-model pipeline of large language models (LLMs): an initial LLM identifies potential violations, three independent models then validate its reasoning, and the final assessment incorporates expert judgment. The approach is systematically evaluated on open-source projects to assess the accuracy, consistency, and limitations of LLMs in identifying architectural decision violations. Experimental results show that the method performs well and consistently for decisions that are explicit or inferable from source code, yet shows limited effectiveness for implicit decisions that rely on deployment configurations or organizational knowledge. This study provides the first empirical delineation of the capability boundaries of LLMs for this specific software engineering task.

📝 Abstract
Architectural Decision Records (ADRs) play a central role in maintaining software architecture quality, yet many decision violations go unnoticed because projects lack both systematic documentation and automated detection mechanisms. Recent advances in Large Language Models (LLMs) open up new possibilities for automating architectural reasoning at scale. We investigated how effectively LLMs can identify decision violations in open-source systems by examining their agreement, accuracy, and inherent limitations. Our study analyzed 980 ADRs across 109 GitHub repositories using a multi-model pipeline in which one LLM performs a primary screening for potential decision violations and three additional LLMs independently validate its reasoning. We assessed agreement, accuracy, precision, and recall, and complemented the quantitative findings with expert evaluation. The models achieved substantial agreement and strong accuracy for explicit, code-inferable decisions, but accuracy fell short for implicit decisions that depend on deployment configuration or organizational knowledge. LLMs can therefore meaningfully support validation of architectural decision compliance; however, they cannot yet replace human expertise for decisions not grounded in code.
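The screen-then-validate pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names (`primary_screen`, `validate`, `pipeline`) and the keyword heuristics standing in for LLM calls are all assumptions made for the example; a real pipeline would replace them with model invocations and a majority-agreement rule over the validators' verdicts.

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    violation: bool
    rationale: str


def primary_screen(adr_text: str, code_summary: str) -> Verdict:
    # Stand-in for the screening LLM (hypothetical heuristic):
    # flag a potential violation when the code summary contradicts the ADR.
    violation = "must" in adr_text and "deviates" in code_summary
    return Verdict(violation, "heuristic stand-in for an LLM judgment")


def validate(verdict: Verdict, adr_text: str, code_summary: str) -> bool:
    # Stand-in for one independent validator LLM: re-checks whether the
    # screener's verdict is consistent with the evidence. A real validator
    # would re-reason over the ADR and code independently.
    return verdict.violation == ("deviates" in code_summary)


def pipeline(adr_text: str, code_summary: str, n_validators: int = 3) -> dict:
    # One screener, n independent validators, majority agreement.
    verdict = primary_screen(adr_text, code_summary)
    votes = [validate(verdict, adr_text, code_summary) for _ in range(n_validators)]
    confirmed = sum(votes) > n_validators // 2
    return {
        "flagged": verdict.violation,
        "confirmed": confirmed,
        # Disagreement between screener and validators is routed to a human.
        "needs_expert": verdict.violation and not confirmed,
    }
```

Routing unconfirmed flags to an expert mirrors the paper's finding that human judgment remains necessary for decisions not inferable from code.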
Problem

Research questions and friction points this paper is trying to address.

Architectural Decision Records
Decision Violations
Large Language Models
Software Architecture
Automated Detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Architectural Decision Records
Decision Violation Detection
Multi-model Validation
Software Architecture Compliance