TraceLLM: Leveraging Large Language Models with Prompt Engineering for Enhanced Requirements Traceability

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional requirements traceability approaches are labor-intensive, error-prone, and suffer from low precision, making it difficult to establish reliable links between requirements and other software artifacts. This work proposes the first systematic prompt engineering framework tailored for requirements traceability, enhancing the zero-shot and few-shot performance of large language models (LLMs) through contextual role injection, integration of domain knowledge, and a label-aware strategy for selecting diverse exemplars. Evaluated on four cross-domain benchmark datasets, the proposed method achieves state-of-the-art F2 scores, significantly outperforming conventional information retrieval techniques, fine-tuned models, and existing LLM-based approaches. The results demonstrate its strong potential to support semi-automated traceability workflows in practical software engineering contexts.

📝 Abstract
Requirements traceability, the process of establishing and maintaining relationships between requirements and various software development artifacts, is paramount for ensuring system integrity and fulfilling requirements throughout the Software Development Life Cycle (SDLC). Traditional methods, including manual and information retrieval models, are labor-intensive, error-prone, and limited by low precision. Recently, Large Language Models (LLMs) have demonstrated potential for supporting software engineering tasks through advanced language comprehension. However, a substantial gap exists in the systematic design and evaluation of prompts tailored to extract accurate trace links. This paper introduces TraceLLM, a systematic framework for enhancing requirements traceability through prompt engineering and demonstration selection. Our approach incorporates rigorous dataset splitting, iterative prompt refinement, enrichment with contextual roles and domain knowledge, and evaluation across zero- and few-shot settings. We assess prompt generalization and robustness using eight state-of-the-art LLMs on four benchmark datasets representing diverse domains (aerospace, healthcare) and artifact types (requirements, design elements, test cases, regulations). TraceLLM achieves state-of-the-art F2 scores, outperforming traditional IR baselines, fine-tuned models, and prior LLM-based methods. We also explore the impact of demonstration selection strategies, identifying label-aware, diversity-based sampling as particularly effective. Overall, our findings highlight that traceability performance depends not only on model capacity but also critically on the quality of prompt engineering. In addition, the achieved performance suggests that TraceLLM can support semi-automated traceability workflows in which candidate links are reviewed and validated by human analysts.
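The abstract's label-aware, diversity-based demonstration selection can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the example format, and the use of greedy farthest-point selection over embeddings are all assumptions about how "label-aware, diversity-based sampling" might be realized.

```python
# Hypothetical sketch: pick a fixed number of diverse few-shot exemplars per
# label (e.g. "linked" vs "not linked") via greedy farthest-point selection
# on embedding distance. `embed` is assumed to map text to a vector.
from collections import defaultdict


def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def select_demonstrations(pool, embed, k_per_label=2):
    """Return up to k_per_label exemplars for each label, chosen so that
    exemplars within a label are maximally spread out in embedding space."""
    by_label = defaultdict(list)
    for ex in pool:
        by_label[ex["label"]].append(ex)

    selected = []
    for label, examples in by_label.items():
        vecs = [embed(ex["text"]) for ex in examples]
        chosen = [0]  # seed with the first example of this label
        while len(chosen) < min(k_per_label, len(examples)):
            # greedily add the example farthest from all chosen so far
            best_i, best_d = None, -1.0
            for i in range(len(examples)):
                if i in chosen:
                    continue
                d = min(euclidean(vecs[i], vecs[j]) for j in chosen)
                if d > best_d:
                    best_i, best_d = i, d
            chosen.append(best_i)
        selected.extend(examples[i] for i in chosen)
    return selected
```

The selected exemplars would then be inlined into the few-shot prompt alongside the contextual role and domain knowledge described above; ensuring both labels are represented keeps the demonstrations from biasing the model toward always predicting (or rejecting) a trace link.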
Problem

Research questions and friction points this paper is trying to address.

requirements traceability
large language models
prompt engineering
software development artifacts
trace links
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt Engineering
Requirements Traceability
Large Language Models
Demonstration Selection
Few-shot Learning