🤖 AI Summary
Vision-language models (VLMs) exhibit poor compositional generalization and object-attribute binding, failing to handle novel object-attribute combinations. This work applies mechanistic interpretability methods to the CLIP vision encoder and identifies a key failure mode: representational confusion in the MLP layers caused by multi-feature overactivation, where individual neurons jointly encode multiple semantic features, leading to erroneous binding of objects and attributes. Through systematic analysis of neuron activation patterns, we establish this feature overactivation as a fundamental bottleneck limiting compositional reasoning. Our findings point to a neural basis for VLMs' compositional failures and are accompanied by open-source, reproducible code and empirical results, enabling interpretable, intervention-ready pathways for improving structural robustness in VLMs.
📝 Abstract
Vision-Language Models (VLMs) have shown remarkable performance in integrating visual and textual information for tasks such as image captioning and visual question answering. However, these models struggle with compositional generalization and object binding, which limits their ability to handle novel combinations of objects and their attributes. Our work explores the root causes of these failures using mechanistic interpretability techniques. We present evidence that individual neurons in the MLP layers of CLIP's vision encoder represent multiple features, and that this "superposition" directly hinders compositional feature representation, which in turn degrades compositional reasoning and object binding. We hope this study serves as an initial step toward uncovering the mechanistic roots of compositional failures in VLMs. The code and supporting results can be found at https://github.com/Mystic-Slice/Do-VLMs-Have-Bad-Eyes .
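The kind of analysis described above starts by reading out per-neuron activations from the MLP layers of CLIP's vision encoder. The following is a minimal sketch of that setup using Hugging Face `transformers` forward hooks; it is not the repository's actual code. To keep the example self-contained and runnable offline, it builds a tiny randomly initialized `CLIPVisionModel` rather than loading pretrained weights (in practice one would load e.g. `openai/clip-vit-base-patch32` and feed real preprocessed images).

```python
import torch
from transformers import CLIPVisionConfig, CLIPVisionModel

# Tiny random-weight config so the sketch runs without downloads.
# Real analysis would use a pretrained checkpoint instead.
config = CLIPVisionConfig(
    hidden_size=64,
    intermediate_size=256,   # one MLP neuron per intermediate dimension
    num_hidden_layers=2,
    num_attention_heads=4,
    image_size=32,
    patch_size=8,
)
model = CLIPVisionModel(config).eval()

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # output shape: (batch, tokens, intermediate_size),
        # i.e. one post-nonlinearity value per MLP neuron per token
        activations[name] = output.detach()
    return hook

# Hook the activation function inside each encoder layer's MLP to
# capture per-neuron firing patterns.
for i, layer in enumerate(model.vision_model.encoder.layers):
    layer.mlp.activation_fn.register_forward_hook(make_hook(f"mlp_{i}"))

with torch.no_grad():
    pixels = torch.randn(1, 3, 32, 32)  # stand-in for a preprocessed image
    model(pixel_values=pixels)

for name, act in activations.items():
    print(name, tuple(act.shape))
```

Comparing these activation tensors across inputs that isolate different attributes (color, shape, object class) is one way to surface neurons that fire for several unrelated features at once, i.e. candidates for the superposition the abstract describes.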