Relative Drawing Identification Complexity is Invariant to Modality in Vision-Language Models

📅 2025-05-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates whether the teaching complexity of cross-modal concept recognition in vision-language models exhibits modality invariance. Within the machine teaching framework, we quantify and compare learning difficulty for object recognition across two distinct modalities: bitmap images from Quick, Draw! and vector-based TikZ coordinate representations. Our findings are threefold: (1) Teaching complexity rankings are highly consistent across modalities (Spearman ρ > 0.85), and this consistency remains robust even after controlling for human prior knowledge; (2) Concept-level teaching complexity is fundamentally modality-agnostic, indicating that “concept simplicity” is an intrinsic, cross-modal property; (3) These results challenge prevailing modality-specific representation assumptions, offering novel theoretical foundations for cross-modal alignment and interpretable AI.

📝 Abstract
Large language models have become multimodal, and many of them are said to integrate their modalities using common representations. If this were true, a drawing of a car as an image, for instance, should map to a similar area in the latent space as a textual description of the strokes that make up the drawing. To explore this in a black-box access regime to these models, we propose the use of machine teaching, a theory that studies the minimal set of examples a teacher needs to choose so that the learner captures the concept. In this paper we evaluate the complexity of teaching visual-language models a subset of objects in the Quick, Draw! dataset using two presentations: raw images as bitmaps and trace coordinates in TikZ format. The results indicate that image-based representations generally require fewer segments and achieve higher accuracy than coordinate-based representations. But, surprisingly, the teaching size usually ranks concepts similarly across both modalities, even when controlling for (a human proxy of) concept priors, suggesting that the simplicity of concepts may be an inherent property that transcends modality representations.
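The cross-modal consistency claim rests on rank-correlating per-concept teaching sizes between the two modalities. A minimal sketch of that comparison, using hypothetical teaching-size values (the concept names and numbers below are illustrative, not the paper's data):

```python
def ranks(xs):
    # Rank positions (1 = smallest value); assumes no ties for simplicity.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(a, b):
    # Spearman's rho via the classic formula 1 - 6*sum(d^2) / (n*(n^2-1)).
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical per-concept teaching sizes (examples needed) per modality.
concepts = ["car", "house", "tree", "cat", "clock"]
bitmap_ts = [3, 2, 4, 5, 1]  # bitmap (Quick, Draw! image) modality
tikz_ts = [4, 2, 6, 5, 1]    # TikZ coordinate modality

rho = spearman_rho(bitmap_ts, tikz_ts)
print(f"Spearman rho = {rho:.2f}")
```

A rho above 0.85, as the summary reports, means the two modalities order concepts by teaching difficulty almost identically even when the absolute teaching sizes differ.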
Problem

Research questions and friction points this paper is trying to address.

Evaluate teaching complexity for visual-language models using different modalities
Compare image-based and coordinate-based representations in teaching object concepts
Assess if concept simplicity is invariant across modality representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses machine teaching theory
Compares bitmap and TikZ formats
Measures teaching size complexity
Diogo Freitas
Interactive Technologies Institute and NOVA LINCS, Faculty of Exact Sciences and Engineering, University of Madeira, Portugal
Brigt Haavardstun
Department of Informatics, University of Bergen, Norway
César Ferri
Valencian Research Institute for Artificial Intelligence, Universitat Politècnica de València, Spain
Darío Garigliotti
Department of Informatics, University of Bergen, Norway
J. A. Telle
Department of Informatics, University of Bergen, Norway
José Hernández-Orallo
University of Cambridge, VRAIN-UPV
Artificial Intelligence · Data Science · Intelligence · AI Evaluation · AI Safety