🤖 AI Summary
Existing multimodal large language models (MLLMs) perform poorly on complex tasks that require visual planning and mental imagery, largely because they lack the human-like "sketch-style" visual reasoning people use to work through such problems. To address this, we propose Latent Sketchpad, a framework that repurposes the internal visual representations of MLLMs for generative visual thinking. Our method introduces a Context-Aware Vision Head that autoregressively produces interpretable visual latents, and couples it with a pretrained Sketch Decoder that maps these latents into semantically coherent sketch images, enabling textual reasoning and visual thought to be modeled jointly. Crucially, the framework requires no architectural modification of the base MLLM and can be attached to diverse models in a plug-and-play fashion. Evaluated on the MazePlanning benchmark, our approach matches or surpasses the reasoning performance of the underlying backbones and generalizes across mainstream MLLMs, including Gemma3 and Qwen2.5-VL.
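For intuition, the minimal PyTorch sketch below illustrates what such a plug-and-play attachment could look like: a small vision head attends over a frozen backbone's hidden states to produce visual latents, and a separate decoder renders them as a sketch image. Every name, dimension, and design choice here (the learned-query cross-attention head, the MLP decoder, the shapes) is an illustrative assumption, not the released Latent Sketchpad implementation.

```python
import torch
import torch.nn as nn


class ContextAwareVisionHeadSketch(nn.Module):
    """Toy stand-in for the Context-Aware Vision Head: produces a block of
    visual latents by cross-attending learned queries over the frozen MLLM's
    hidden-state context. (The paper's head works autoregressively; this
    one-shot query design is a simplifying assumption.)"""

    def __init__(self, hidden_dim: int = 1024, latent_dim: int = 256, n_latents: int = 16):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_latents, hidden_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # context: (batch, seq_len, hidden_dim) hidden states from the frozen MLLM
        q = self.queries.unsqueeze(0).expand(context.size(0), -1, -1)
        attended, _ = self.cross_attn(q, context, context)
        return self.to_latent(attended)  # (batch, n_latents, latent_dim)


class SketchDecoderSketch(nn.Module):
    """Toy stand-in for the pretrained Sketch Decoder: maps pooled visual
    latents to a small grayscale image via an MLP (purely illustrative)."""

    def __init__(self, latent_dim: int = 256, image_size: int = 64):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.GELU(),
            nn.Linear(512, image_size * image_size), nn.Sigmoid(),
        )

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        pooled = latents.mean(dim=1)                     # (batch, latent_dim)
        img = self.net(pooled)
        return img.view(-1, 1, self.image_size, self.image_size)


if __name__ == "__main__":
    head, decoder = ContextAwareVisionHeadSketch(), SketchDecoderSketch()
    hidden_states = torch.randn(2, 77, 1024)   # stand-in for frozen MLLM outputs
    latents = head(hidden_states)              # visual "thoughts"
    sketch = decoder(latents)                  # human-inspectable sketch image
    print(latents.shape, sketch.shape)         # (2, 16, 256) (2, 1, 64, 64)
```

Because only the head and decoder are new, the base MLLM's weights stay untouched, which is what makes the attachment plug-and-play in this reading.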
📝 Abstract
While Multimodal Large Language Models (MLLMs) excel at visual understanding, they often struggle in complex scenarios that require visual planning and imagination. Inspired by how humans use sketching as a form of visual thinking to develop and communicate ideas, we introduce Latent Sketchpad, a framework that equips MLLMs with an internal visual scratchpad. The internal visual representations of MLLMs have traditionally been confined to perceptual understanding; we repurpose them to support generative visual thought without compromising reasoning ability. Building on frontier MLLMs, our approach integrates visual generation directly into their native autoregressive reasoning process, allowing the model to interleave textual reasoning with the generation of visual latents. These latents guide the internal thought process and can be translated into sketch images for interpretability. To realize this, we introduce two components: a Context-Aware Vision Head that autoregressively produces visual representations, and a pretrained Sketch Decoder that renders them into human-interpretable images. We evaluate the framework on our new dataset, MazePlanning. Experiments across various MLLMs show that Latent Sketchpad delivers reasoning performance comparable or even superior to that of their backbones. It further generalizes across distinct frontier MLLMs, including Gemma3 and Qwen2.5-VL. By extending models' textual reasoning to visual thinking, our framework opens new opportunities for richer human-computer interaction and broader applications. More details and resources are available on our project page: https://latent-sketchpad.github.io/.
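The interleaved reasoning loop described above can be pictured roughly as follows. This is a toy Python sketch with stub components (`text_step`, `vision_head`, `latents_to_hidden`, and `sketch_decoder` are hypothetical stand-ins), not the actual generation procedure of Latent Sketchpad; it only shows how textual steps and blocks of visual latents might alternate in one autoregressive context, with each latent block also rendered to a sketch for inspection.

```python
import torch

# Assumed toy dimensions: shared hidden/latent width of 64.
HIDDEN, LATENT, N_LATENTS = 64, 64, 4


def text_step(context: torch.Tensor) -> torch.Tensor:
    """Stub for one autoregressive text step: emits one new hidden state."""
    return context.mean(dim=1, keepdim=True)


def vision_head(context: torch.Tensor) -> torch.Tensor:
    """Stub Context-Aware Vision Head: emits a block of visual latents."""
    return torch.randn(context.size(0), N_LATENTS, LATENT)


def latents_to_hidden(latents: torch.Tensor) -> torch.Tensor:
    """Stub connector: here an identity, standing in for whatever projection
    puts visual latents back into the reasoning context."""
    return latents


def sketch_decoder(latents: torch.Tensor) -> torch.Tensor:
    """Stub Sketch Decoder: renders a latent block as a tiny 8x8 'sketch'."""
    return torch.sigmoid(latents.mean(dim=1)).view(-1, 1, 8, 8)


context = torch.randn(1, 10, HIDDEN)   # encoded prompt (e.g., maze image + question)
sketches = []
for step in range(3):                   # a few interleaved reasoning rounds
    # 1) textual reasoning: extend the context with a new text hidden state
    context = torch.cat([context, text_step(context)], dim=1)
    # 2) visual thinking: produce visual latents and keep a rendered sketch
    latents = vision_head(context)
    sketches.append(sketch_decoder(latents))
    # 3) feed the visual thought back into the autoregressive context
    context = torch.cat([context, latents_to_hidden(latents)], dim=1)

print(context.shape, len(sketches), sketches[0].shape)
```

The point of the sketch is only the control flow: reasoning stays in the model's native autoregressive loop, and the rendered images are a side channel for interpretability rather than an input the loop depends on.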