🤖 AI Summary
This paper identifies a theoretical deficit in software engineering (SE) research on “trust” in AI assistants: existing work frequently equates trust with the acceptance of generated content, neglecting its psychological and philosophical foundations and thereby hindering meaningful secondary research. To address this, the authors conduct an interdisciplinary literature review, synthesizing trust theories from psychology and philosophy and critically analyzing how SE, human-computer interaction (HCI), and information systems (IS) conceptualize trust. They find that SE lags significantly behind HCI and IS in theoretical maturity: the related disciplines commonly embed their methodology and results in established trust models, distinguishing, for example, between initial trust and trust formation. Building on these models, the paper offers concrete recommendations for SE, including guidance on selecting established trust models and on using validated measurement instruments to study trust in AI assistants beyond the acceptance of generated software artifacts. This work advances SE trust research toward greater theoretical rigor, conceptual coherence, and empirical comparability.
📝 Abstract
Trust is a fundamental concept in human decision-making and collaboration that has long been studied in philosophy and psychology. However, software engineering (SE) articles often use the term 'trust' informally; providing an explicit definition or embedding results in established trust models is rare. In SE research on AI assistants, this practice culminates in equating trust with the likelihood of accepting generated content, which does not capture the full complexity of the trust concept. Without a common definition, true secondary research on trust is impossible. The objectives of our research were: (1) to present the psychological and philosophical foundations of human trust, (2) to systematically study how trust is conceptualized in SE and the related disciplines of human-computer interaction and information systems, and (3) to discuss the limitations of equating trust with content acceptance, outlining how SE research can adopt existing trust models to overcome the widespread informal use of the term 'trust'. We conducted a literature review across disciplines and a critical review of recent SE articles focusing on conceptualizations of trust. We found that trust is rarely defined or conceptualized in SE articles. Related disciplines commonly embed their methodology and results in established trust models, clearly distinguishing, for example, between initial trust and trust formation and discussing whether and when trust can be applied to AI assistants. Our study reveals a significant maturity gap in trust research in SE compared to related disciplines. We provide concrete recommendations on how SE researchers can adopt established trust models and instruments to study trust in AI assistants beyond the acceptance of generated software artifacts.