AI Summary
Interactive AI systems suffer from opacity due to the black-box nature of AI models and tight architectural coupling, rendering holistic transparency unattainable; existing XAI methods enhance only the interpretability of individual models, failing to support human-AI collaborative understanding and control. Method: We propose a novel paradigm that decouples system architecture from XAI techniques, designing composable, process- and API-driven structural building blocks integrated with explanation modules (e.g., LIME, SHAP) to form MATCH, a dual-track framework combining structured modeling and explainability enhancement. Contribution/Results: MATCH elevates XAI from the model level to the system architecture level, enabling end-to-end traceability, intervention, and collaboration for embedded AI behaviors. Experiments demonstrate significant improvements in system-level understandability and human-AI collaboration efficiency, while supporting flexible integration into existing interactive systems.
Abstract
While the growing integration of AI technologies into interactive systems enables them to solve ever more tasks, the black-box problem of AI models spreads throughout the interactive system as a whole. Explainable AI (XAI) techniques can make AI models more accessible by employing post-hoc methods or by transitioning to inherently interpretable models. Although this makes individual AI models clearer, the overarching system architecture remains opaque. This challenge pertains not only to standard XAI techniques but also to human examination and conversational XAI approaches, which need access to model internals to interpret them correctly and completely. To this end, we propose conceptually representing such interactive systems as sequences of structural building blocks. These include the AI models themselves, as well as control mechanisms grounded in the literature. The structural building blocks can then be explained through complementary explanatory building blocks, such as established XAI techniques like LIME and SHAP. The flow and APIs of the structural building blocks form an unambiguous overview of the underlying system, serving as a communication basis for both human and automated agents and thus aligning human and machine interpretability of the embedded AI models. In this paper, we present our flow-based approach and a selection of building blocks as MATCH: a framework for engineering Multi-Agent Transparent and Controllable Human-centered systems. This research contributes to the field of (conversational) XAI by facilitating the integration of interpretability into existing interactive systems.
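To make the pairing of structural and explanatory building blocks concrete, the following is a minimal, hypothetical sketch in Python. It is not the MATCH implementation; all names (`Block`, `run_flow`, the toy steps and explainer) are illustrative assumptions. The idea shown is only the general pattern the abstract describes: each step in the flow exposes a uniform API, and an optional explainer attached to a step can report on that step's last invocation, so the flow itself serves as the system overview.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Optional


@dataclass
class Block:
    """A structural building block: one step in the system's flow.

    `fn` is the step's behavior (e.g. an AI model's predict function);
    `explainer` is an optional explanatory building block that, in a
    real system, could wrap a technique such as LIME or SHAP.
    """
    name: str
    fn: Callable[[Any], Any]
    explainer: Optional[Callable[[Any, Any], str]] = None
    last: Optional[tuple] = field(default=None, repr=False)

    def run(self, x: Any) -> Any:
        y = self.fn(x)
        self.last = (x, y)  # record input/output so the explainer can inspect it
        return y

    def explain(self) -> str:
        if self.explainer is None or self.last is None:
            return f"{self.name}: no explanation available"
        return self.explainer(*self.last)


def run_flow(blocks: list[Block], x: Any) -> Any:
    """Execute the blocks in sequence; the ordered flow is the system overview."""
    for block in blocks:
        x = block.run(x)
    return x


# Toy flow: a preprocessing step followed by an 'AI model' step with an explainer.
flow = [
    Block("normalize", lambda v: v / 10),
    Block(
        "score",
        lambda v: v * 3 + 1,
        explainer=lambda x, y: f"score: input {x} weighted by 3, bias 1 -> {y}",
    ),
]

result = run_flow(flow, 20)
print(result)              # 20 -> 2.0 -> 7.0
print(flow[1].explain())   # explanation of the last 'score' invocation
```

Because both humans and automated agents interact with the same `run` and `explain` API, a conversational XAI agent could, under this sketch, traverse the flow and query each block's explainer without needing access to the model internals directly.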