🤖 AI Summary
Although Bayesian networks (BNs) possess theoretical interpretability, their inference processes remain opaque to users; existing interfaces inadequately convey observational impacts, propagation paths, and underlying causal mechanisms among variables.
Method: This study introduces three multimodal interface extensions—verbal explanations, visualizations, and their synergistic integration—evaluated systematically through UI design, dynamic BN visualization, natural language generation, and controlled cognitive experiments.
Contribution/Results: All extensions significantly outperformed conventional baseline interfaces. Notably, the verbal–visual synergistic interface substantially improved user comprehension accuracy (+27%) and reasoning efficiency (31% reduction in response time) on complex inference tasks. These findings empirically validate the critical role of multimodal explanation in enhancing BN understandability. To our knowledge, this is the first systematic, empirically grounded comparison of multimodal BN explanation paradigms that demonstrates measurable cognitive benefits.
📝 Abstract
Bayesian Networks (BNs) are an important tool for assisting probabilistic reasoning, but despite being considered transparent models, people have trouble understanding them. Further, current User Interfaces (UIs) still do not clarify the reasoning of BNs. To address this problem, we have designed verbal and visual extensions to the standard BN UI, which can guide users through common inference patterns.
We conducted a user study to compare our verbal, visual and combined UI extensions, and a baseline UI. Our main findings are: (1) users did better with all three types of extensions than with the baseline UI for questions about the impact of an observation, the paths that enable this impact, and the way in which an observation influences the impact of other observations; and (2) using verbal and visual modalities together is better than using either modality alone for some of these question types.