🤖 AI Summary
This study investigates how multimodal models—spanning language, speech, and vision—encode structured input interactions (e.g., syntactic dependencies, coarticulation, object boundaries) within their internal representations. To address this, we propose an interpretability framework grounded in the Shapley–Taylor Interaction Index (STII), enabling the first cross-modal validation of interaction patterns. Applying STII across diverse architectures, we find: (i) in masked and autoregressive language models, nonlinear interactions are significantly modulated by syntactic distance; (ii) in speech recognition models, interaction structures align with articulatory-physiological constraints; and (iii) in CNN- and Transformer-based image classifiers, higher-order interactions naturally concentrate at semantic boundaries. Collectively, these results demonstrate that STII robustly captures modality-specific structural priors, offering a unified, empirically verifiable metric for probing representational geometry across modalities.
📝 Abstract
Measuring nonlinear feature interaction is an established approach to understanding complex patterns of attribution in many models. In this paper, we use Shapley–Taylor interaction indices (STII) to analyze the impact of underlying data structure on model representations in a variety of modalities, tasks, and architectures. Considering linguistic structure in masked and autoregressive language models (MLMs and ALMs), we find that STII increases within idiomatic expressions and that MLMs scale STII with syntactic distance, relying more on syntax in their nonlinear structure than ALMs do. Our speech model findings reflect the phonetic principle that the openness of the oral cavity determines how much a phoneme varies based on its context. Finally, we study image classifiers and illustrate that feature interactions intuitively reflect object boundaries. Our wide range of results illustrates the benefits of interdisciplinary work and domain expertise in interpretability research.
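To make the central metric concrete, here is a minimal brute-force sketch of the order-2 Shapley–Taylor interaction index for a pair of features. The value function `f` is a hypothetical stand-in for a model's output evaluated with only a subset of features unmasked (e.g. unmasked tokens, audio frames, or image patches); it is an assumption for illustration, not the paper's implementation, and exact enumeration is only feasible for small feature counts.

```python
from itertools import combinations
from math import comb

def pairwise_stii(f, n, i, j):
    """Exact Shapley-Taylor interaction index of order 2 for features i, j.

    f: value function mapping a frozenset of feature indices to a float
       (hypothetical stand-in for a model's masked-input output).
    n: total number of features; runtime is exponential in n.
    """
    rest = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for r in range(len(rest) + 1):
        for T in combinations(rest, r):
            T = frozenset(T)
            # Discrete second derivative of f at T in directions i and j:
            # nonzero only when i and j interact nonlinearly given context T.
            delta = f(T | {i, j}) - f(T | {i}) - f(T | {j}) + f(T)
            total += delta / comb(n - 1, r)
    return (2 / n) * total
```

For a purely additive value function the discrete second derivative vanishes everywhere, so the index is zero; a value function that rewards features 0 and 1 only when both are present yields a strictly positive index, which is the kind of nonlinear interaction the paper measures across modalities.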