Probing Visual Concepts in Lightweight Vision-Language Models for Automated Driving

📅 2026-03-06
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This study addresses the frequent failure of lightweight vision-language models in autonomous driving due to their inability to reliably process critical visual concepts, a problem whose root causes remain unclear. By constructing counterfactual image sets that differ only in specific visual attributes—such as object presence, orientation, and distance—and analyzing intermediate activations of four state-of-the-art models using linear probes, the authors systematically identify two distinct failure modes: perceptual and cognitive failures. Their findings reveal that concepts like object presence are explicitly encoded in a linearly separable manner, whereas spatial relationships such as orientation are only implicitly preserved. Moreover, increasing target distance significantly degrades the linear separability of these concepts. This work uncovers key bottlenecks in visual information propagation within such models and provides a theoretical foundation for targeted architectural improvements.

📝 Abstract
The use of Vision-Language Models (VLMs) in automated driving applications is becoming increasingly common, with the aim of leveraging their reasoning and generalisation capabilities to handle long tail scenarios. However, these models often fail on simple visual questions that are highly relevant to automated driving, and the reasons behind these failures remain poorly understood. In this work, we examine the intermediate activations of VLMs and assess the extent to which specific visual concepts are linearly encoded, with the goal of identifying bottlenecks in the flow of visual information. Specifically, we create counterfactual image sets that differ only in a targeted visual concept and then train linear probes to distinguish between them using the activations of four state-of-the-art (SOTA) VLMs. Our results show that concepts such as the presence of an object or agent in a scene are explicitly and linearly encoded, whereas other spatial visual concepts, such as the orientation of an object or agent, are only implicitly encoded by the spatial structure retained by the vision encoder. In parallel, we observe that in certain cases, even when a concept is linearly encoded in the model's activations, the model still fails to answer correctly. This leads us to identify two failure modes. The first is perceptual failure, where the visual information required to answer a question is not linearly encoded in the model's activations. The second is cognitive failure, where the visual information is present but the model fails to align it correctly with language semantics. Finally, we show that increasing the distance of the object in question quickly degrades the linear separability of the corresponding visual concept. Overall, our findings improve our understanding of failure cases in VLMs on simple visual tasks that are highly relevant to automated driving.
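The probing setup described in the abstract — training a linear classifier on intermediate activations to test whether a visual concept is linearly separable — can be sketched as follows. The synthetic "activations", the concept direction, and all hyperparameters here are illustrative stand-ins, not the paper's actual VLM features or data; in the real study the features would be activations extracted from a frozen layer of one of the four SOTA VLMs on counterfactual image pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for VLM intermediate activations: two counterfactual
# sets that differ only in one visual concept (e.g. object present vs absent).
d = 64
direction = rng.normal(size=d)           # assumed linear "concept direction"
base = rng.normal(size=(200, d))         # shared scene content
X_pos = base + 0.8 * direction           # concept present
X_neg = base - 0.8 * direction           # concept absent
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Shuffle and split into train/test
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

# Linear probe: logistic regression fit by plain gradient descent
w, b = np.zeros(d), 0.0
for _ in range(500):
    z = np.clip(X_tr @ w + b, -30, 30)   # clip to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.5 * (X_tr.T @ (p - y_tr)) / len(y_tr)
    b -= 0.5 * (p - y_tr).mean()

# Held-out probe accuracy: high accuracy => concept is linearly encoded
acc = ((X_te @ w + b > 0) == y_te).mean()
print(f"probe accuracy: {acc:.2f}")
```

In this toy setting the two sets are well separated along the concept direction, so the probe reaches near-perfect accuracy; the paper's point is that for some concepts (e.g. orientation, or distant objects) the analogous real probe does not, which is what identifies a perceptual bottleneck.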
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
automated driving
visual concepts
model failures
linear encoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language models
linear probing
counterfactual analysis
perceptual failure
cognitive failure
Nikos Theodoridis
Department of Electronic and Computer Engineering, University of Limerick
Reenu Mohandas
Department of Electronic and Computer Engineering, University of Limerick
Ganesh Sistu
Principal Artificial Intelligence Architect, Valeo Ireland
Autonomous Driving · Machine Learning · Computer Vision · Deep Learning
Anthony Scanlan
Department of Electronic and Computer Engineering, University of Limerick
Ciarán Eising
University of Limerick
computer vision
Tim Brophy
University of Galway