Mapping the Trust Terrain: LLMs in Software Engineering -- Insights and Perspectives

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Trust-related constructs for large language models (LLMs) in software engineering (SE)—such as trust, distrust, and trustworthiness—suffer from conceptual ambiguity and poorly defined boundaries, hindering rigorous theoretical development and practical adoption. Method: The authors conduct a systematic literature review (SLR), synthesizing 88 peer-reviewed studies, and survey 25 domain experts. Contribution/Results: They present the first SE-specific concept map of LLM trust, derived through cross-domain conceptual mapping and empirical analysis, which clarifies the semantics and contextual boundaries of core trust constructs across the SE lifecycle. The study uncovers a significant gap in understanding between academic research and industrial practice, proposes a structured trust framework, identifies six critical research gaps, and delivers a comprehensive trust research roadmap spanning requirements, development, testing, and operations. This work establishes foundational support for both theoretical advancement and engineering deployment of trustworthy LLMs in SE.

📝 Abstract
Applications of Large Language Models (LLMs) are rapidly growing in industry and academia for various software engineering (SE) tasks. As these models become more integral to critical processes, ensuring their reliability and trustworthiness becomes essential. Consequently, the concept of trust in these systems is becoming increasingly critical. Well-calibrated trust is important: excessive trust can lead to security vulnerabilities and risks, while insufficient trust can hinder innovation. However, the landscape of trust-related concepts in LLMs in SE is relatively unclear, with concepts such as trust, distrust, and trustworthiness lacking clear conceptualizations in the SE community. To bring clarity to the current research status and identify opportunities for future work, we conducted a comprehensive review of 88 papers: a systematic literature review of 18 papers focused on LLMs in SE, complemented by an analysis of 70 papers from broader trust literature. Additionally, we conducted a survey study with 25 domain experts to gain insights into practitioners' understanding of trust and identify gaps between existing literature and developers' perceptions. The result of our analysis serves as a roadmap that covers trust-related concepts in LLMs in SE and highlights areas for future exploration.
Problem

Research questions and friction points this paper is trying to address.

Ensuring reliability and trustworthiness of LLMs in software engineering.
Clarifying trust-related concepts in LLMs for software engineering tasks.
Identifying gaps between literature and developer perceptions of trust.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic literature review of 18 papers on LLMs in SE
Complementary analysis of 70 papers from broader trust literature
Survey study with 25 domain experts