How well can a large language model explain business processes as perceived by users?

📅 2024-01-23
📈 Citations: 10
Influential: 1
🤖 AI Summary
This study addresses the limited explainability of large language models (LLMs) in business process management (BPM), particularly their lack of contextual grounding and causal justification. To tackle this, we propose SAX4BPM, a novel framework that introduces the *causal process execution view* as the foundational knowledge source for Situation-Aware eXplainability (SAX), paired with an LLM explanation generation paradigm calibrated against user perception. We empirically identify trust and curiosity as key moderators of explanation fidelity. Through rigorous human-subject experiments and validated psychometric scales, we demonstrate that input-guided, performance-constrained LLMs significantly improve perceived explanation fidelity (+23.6%) at the cost of only a marginal reduction in perceived interpretability (−7.2%), yielding net gains in user understanding and trust. SAX4BPM thus establishes a reusable, causality-driven explainability paradigm for trustworthy AI in BPM.

📝 Abstract
Large Language Models (LLMs) are trained on vast amounts of text to interpret and generate human-like textual content. They are becoming a vital vehicle in realizing the vision of the autonomous enterprise, with organizations today actively adopting LLMs to automate many aspects of their operations. LLMs are likely to play a prominent role in future AI-augmented business process management systems, providing functionality across all stages of the system lifecycle. One such functionality is Situation-Aware eXplainability (SAX), which concerns generating causally sound and human-interpretable explanations. In this paper, we present the SAX4BPM framework, developed to generate SAX explanations. The SAX4BPM suite consists of a set of services and a central knowledge repository. These services elicit the various knowledge ingredients that underlie SAX explanations. A key innovative ingredient is the causal process execution view. In this work, we integrate the framework with an LLM, leveraging its power to synthesize the various input ingredients into improved SAX explanations. Since the use of LLMs for SAX is accompanied by doubts about their capacity to adequately fulfill SAX, given their tendency to hallucinate and their lack of inherent reasoning capacity, we pursued a methodological evaluation of the perceived quality of the generated explanations. We developed a designated scale and conducted a rigorous user study. Our findings show that the input presented to the LLM helped guard-rail its performance, yielding SAX explanations with better perceived fidelity. This improvement is moderated by perceptions of trust and curiosity. However, it comes at the cost of the perceived interpretability of the explanation.
Problem

Large Language Models
Business Process Management
Situation-Aware eXplainability
Innovation

SAX4BPM
Causal Process Execution View
Large Language Models
👥 Authors
Dirk Fahland
Eindhoven University of Technology, Eindhoven, Netherlands
Fabiana Fournier
IBM Research, Haifa, Israel
Lior Limonad
IBM Research, Haifa, Israel
Inna Skarbovsky
IBM Research, Haifa, Israel
Ava Swevels
Eindhoven University of Technology, Eindhoven, Netherlands