Graph Integrated Multimodal Concept Bottleneck Model

📅 2025-10-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing concept bottleneck models (CBMs) are predominantly unimodal and neglect structured dependencies among concepts, limiting their explainable reasoning capability in high-stakes scenarios. To address this, we propose the Multimodal Graph-enhanced CBM (MG-CBM), which innovatively introduces dual graph structures—answer-concept and answer-question graphs—and employs Graph Transformers to model hierarchical concept relationships. MG-CBM further integrates a Mixture-of-Experts (MoE) architecture with dynamic expert selection to enable adaptive task routing. This work is the first to unify structured graph modeling, multimodal concept representation, and sparsely gated MoE within the CBM framework. Evaluated on multiple benchmark datasets, MG-CBM achieves significant improvements over state-of-the-art CBMs: average prediction accuracy increases by 3.2%, and complex concept reasoning performance improves by 19.7%, while preserving strong interpretability and generalization.

Technology Category

Application Category

📝 Abstract
With growing demand for interpretability in deep learning, especially in high stakes domains, Concept Bottleneck Models (CBMs) address this by inserting human understandable concepts into the prediction pipeline, but they are generally single modal and ignore structured concept relationships. To overcome these limitations, we present MoE-SGT, a reasoning driven framework that augments CBMs with a structure injecting Graph Transformer and a Mixture of Experts (MoE) module. We construct answer-concept and answer-question graphs for multimodal inputs to explicitly model the structured relationships among concepts. Subsequently, we integrate Graph Transformer to capture multi level dependencies, addressing the limitations of traditional Concept Bottleneck Models in modeling concept interactions. However, it still encounters bottlenecks in adapting to complex concept patterns. Therefore, we replace the feed forward layers with a Mixture of Experts (MoE) module, enabling the model to have greater capacity in learning diverse concept relationships while dynamically allocating reasoning tasks to different sub experts, thereby significantly enhancing the model's adaptability to complex concept reasoning. MoE-SGT achieves higher accuracy than other concept bottleneck networks on multiple datasets by modeling structured relationships among concepts and utilizing a dynamic expert selection mechanism.
Problem

Research questions and friction points this paper is trying to address.

Enhancing interpretability by modeling structured concept relationships in deep learning
Overcoming single-modal limitations through multimodal graph-based concept integration
Improving complex concept reasoning via dynamic expert mixture and graph transformers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Transformer injects structured concept relationships
Mixture of Experts module replaces feed forward layers
Dynamic expert selection enhances complex concept reasoning
🔎 Similar Papers
No similar papers found.
Jiakai Lin
Jiakai Lin
University of Georgia
Computer Vision
J
Jinchang Zhang
Intelligent Vision and Sensing (IVS) Lab at SUNY Binghamton, USA
Guoyu Lu
Guoyu Lu
SUNY Binghamton
RoboticsComputer VisionMachine Learning