🤖 AI Summary
Autonomous Business Processes (ABPs) enhance operational efficiency and responsiveness but face critical challenges, including lack of trust, debugging complexity, ambiguous accountability, algorithmic bias, and regulatory noncompliance. To address these, this paper introduces the eXplainable Autonomous Business Processes (XABPs) framework. It systematically defines explanation modalities, structures explainability requirements specific to business process management (BPM), and identifies core explainability challenges in ABP contexts. By integrating AI/ML with BPM technologies, the framework embeds self-explaining decision logic to ensure transparent reasoning and end-to-end traceability. We establish a theoretical foundation and taxonomy for XABPs, delineate key technical pathways toward trustworthy deployment, and propose evaluation dimensions for explainability. The work provides both theoretical grounding and methodological support for standardizing explainability and advancing its engineering practice in autonomous process systems.
📝 Abstract
Autonomous business processes (ABPs), i.e., self-executing workflows leveraging AI/ML, have the potential to improve operational efficiency, reduce errors, lower costs, shorten response times, and free human workers for more strategic and creative work. However, ABPs may raise specific concerns, including decreased stakeholder trust, difficulties in debugging, hindered accountability, risk of bias, and issues with regulatory compliance. We argue for eXplainable ABPs (XABPs) to address these concerns by enabling systems to articulate their rationale. The paper outlines a systematic approach to XABPs, characterizing their forms, structuring explainability, and identifying the key BPM research challenges on the path to XABPs.