🤖 AI Summary
This work proposes M-CBE, a novel framework that integrates a mixture-of-experts mechanism into Concept Bottleneck Models (CBMs) to overcome the limitations of conventional approaches that rely on a single linear or Boolean predictor. By letting multiple experts concurrently learn diverse functional forms—such as linear expressions and symbolic expressions discovered via symbolic regression—M-CBE improves predictive accuracy while preserving interpretability. It also supports the automatic discovery of interpretable rules from user-specified operator vocabularies, adapting to varied interpretability requirements. Experiments show that by adjusting the number of experts and their functional forms, M-CBE effectively balances performance and explainability across multiple tasks, offering flexibility that single-predictor CBMs cannot.
📝 Abstract
Concept Bottleneck Models (CBMs) promote interpretability by grounding predictions in human-understandable concepts. However, existing CBMs typically fix their task predictor to a single linear or Boolean expression, limiting both predictive accuracy and adaptability to diverse user needs. We propose Mixture of Concept Bottleneck Experts (M-CBEs), a framework that generalizes existing CBMs along two dimensions: the number of experts and the functional form of each expert, exposing an underexplored region of the design space. We investigate this region by instantiating two novel models: Linear M-CBE, which learns a finite set of linear expressions, and Symbolic M-CBE, which leverages symbolic regression to discover expert functions from data under user-specified operator vocabularies. Empirical evaluation demonstrates that varying the mixture size and functional form provides a robust framework for navigating the accuracy-interpretability trade-off, adapting to different user and task needs.
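To make the architecture concrete, the following is a minimal sketch of what a Linear M-CBE forward pass might look like: concept activations feed both a gating network and a set of interpretable linear experts, and the prediction is the gate-weighted mixture of expert outputs. All names, shapes, and the softmax gating choice are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax for the gating weights.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

class LinearMCBE:
    """Hypothetical sketch of a Linear M-CBE task predictor.

    Assumes concept activations are produced upstream by a concept
    encoder; only the mixture-of-linear-experts head is shown here.
    """

    def __init__(self, n_concepts, n_experts, n_outputs, rng):
        # Each expert is a human-inspectable linear map over concepts.
        self.W = rng.normal(size=(n_experts, n_concepts, n_outputs))
        self.b = np.zeros((n_experts, n_outputs))
        # Gating network: linear scores over concepts -> softmax weights.
        self.G = rng.normal(size=(n_concepts, n_experts))

    def forward(self, c):
        # c: (batch, n_concepts) concept activations.
        gate = softmax(c @ self.G)                                  # (batch, K)
        experts = np.einsum('bc,kco->bko', c, self.W) + self.b      # (batch, K, out)
        # Mixture: gate-weighted sum of the K expert predictions.
        return np.einsum('bk,bko->bo', gate, experts)

rng = np.random.default_rng(0)
model = LinearMCBE(n_concepts=4, n_experts=3, n_outputs=2, rng=rng)
c = rng.uniform(size=(5, 4))
y = model.forward(c)
print(y.shape)  # (5, 2)
```

A Symbolic M-CBE would presumably replace each linear expert with an expression found by symbolic regression over a user-specified operator vocabulary, while keeping the same gated-mixture structure.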