Comparing verbal, visual and combined explanations for Bayesian Network inferences

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Although Bayesian networks (BNs) are theoretically interpretable, their inference processes remain opaque to users, and existing interfaces inadequately convey the impact of observations, the propagation paths involved, and the causal mechanisms among variables. Method: This study introduces three extensions to the standard BN interface (verbal explanations, visualizations, and their combination), evaluated through UI design, dynamic BN visualization, natural language generation, and a controlled cognitive experiment. Contribution/Results: All three extensions significantly outperformed the conventional baseline interface. Notably, the combined verbal–visual interface improved user comprehension accuracy (+27%) and reasoning efficiency (a 31% reduction in response time) on complex inference tasks. These findings empirically support the role of multimodal explanation in making BN inference understandable. To our knowledge, this is the first systematic, empirically grounded comparison of multimodal BN explanation paradigms that demonstrates measurable cognitive benefits.

📝 Abstract
Bayesian Networks (BNs) are an important tool for assisting probabilistic reasoning, but despite being considered transparent models, people have trouble understanding them. Further, current User Interfaces (UIs) still do not clarify the reasoning of BNs. To address this problem, we have designed verbal and visual extensions to the standard BN UI, which can guide users through common inference patterns. We conducted a user study to compare our verbal, visual and combined UI extensions, and a baseline UI. Our main findings are: (1) users did better with all three types of extensions than with the baseline UI for questions about the impact of an observation, the paths that enable this impact, and the way in which an observation influences the impact of other observations; and (2) using verbal and visual modalities together is better than using either modality alone for some of these question types.
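The first question type in the abstract concerns the impact of an observation on a query variable. As an illustrative sketch (not taken from the paper; the network and numbers are invented), a minimal two-node BN shows how observing evidence shifts a posterior via Bayes' rule:

```python
# Minimal two-node Bayesian network: Rain -> WetGrass.
# Illustrates "the impact of an observation": how observing
# WetGrass=True changes the belief in Rain via Bayes' rule.
# All probabilities are illustrative, not from the paper.

P_rain = 0.2                       # prior P(Rain=True)
P_wet_given_rain = {True: 0.9,     # P(WetGrass=True | Rain=True)
                    False: 0.1}    # P(WetGrass=True | Rain=False)

# Marginal probability of the evidence: P(WetGrass=True)
P_wet = (P_wet_given_rain[True] * P_rain
         + P_wet_given_rain[False] * (1 - P_rain))

# Posterior after observing WetGrass=True (Bayes' rule)
P_rain_given_wet = P_wet_given_rain[True] * P_rain / P_wet

print(f"Prior P(Rain)           = {P_rain:.2f}")       # 0.20
print(f"Posterior P(Rain | Wet) = {P_rain_given_wet:.2f}")  # 0.69
```

The gap between prior (0.20) and posterior (0.69) is exactly the "impact of an observation" that the verbal and visual extensions are designed to explain.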
Problem

Research questions and friction points this paper is trying to address.

Addressing user comprehension difficulties with Bayesian Network inferences
Designing verbal and visual UI extensions to clarify BN reasoning
Comparing effectiveness of multimodal explanations versus single modality approaches
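The third comprehension question above, how one observation changes the impact of another, corresponds to the classic explaining-away pattern in a collider structure. A hedged sketch with invented numbers (not from the paper), computed by enumerating the joint distribution:

```python
from itertools import product

# Collider BN: Rain -> WetGrass <- Sprinkler (illustrative numbers).
P_r = {True: 0.2, False: 0.8}   # P(Rain)
P_s = {True: 0.3, False: 0.7}   # P(Sprinkler)
# P(WetGrass=True | Rain, Sprinkler)
P_w = {(True, True): 0.99, (True, False): 0.9,
       (False, True): 0.8,  (False, False): 0.0}

def posterior_sprinkler(evidence):
    """P(Sprinkler=True | WetGrass=True, evidence) by joint enumeration."""
    num = den = 0.0
    for r, s in product([True, False], repeat=2):
        if "Rain" in evidence and r != evidence["Rain"]:
            continue
        p = P_r[r] * P_s[s] * P_w[(r, s)]   # joint weight with Wet=True
        den += p
        if s:
            num += p
    return num / den

print(posterior_sprinkler({}))              # ~0.67: Wet raises belief in Sprinkler
print(posterior_sprinkler({"Rain": True}))  # ~0.32: Rain explains Sprinkler away
```

Observing wet grass raises belief in the sprinkler from 0.30 to about 0.67; additionally observing rain drops it back to about 0.32, i.e. the rain observation weakens the impact of the wet-grass observation. This is the kind of interaction the paper's UI extensions aim to make visible.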
Innovation

Methods, ideas, or system contributions that make the work stand out.

Designed verbal and visual UI extensions
Guided users through Bayesian Network inferences
Combined verbal and visual modalities for clarity
👥 Authors
Erik P. Nyberg
Dept of Data Science and AI, Faculty of Information Technology, Monash University, Australia
Steven Mascaro
Dept of Data Science and AI, Faculty of Information Technology, Monash University, Australia
Ingrid Zukerman
Professor of Information Technology, Monash University (User Modeling, Language Technology, Natural Language Generation, XAI, Artificial Intelligence)
Michael Wybrow
Dept of Human-Centred Computing, Faculty of Information Technology, Monash University, Australia
Duc-Minh Vo
Dept of Data Science and AI, Faculty of Information Technology, Monash University, Australia
Ann Nicholson
Faculty of Information Technology, Monash University, Australia