🤖 AI Summary
This work addresses the limitation of existing vision-language models in complex multi-step visual reasoning, where reliance on textual chains of thought often leads to the loss of critical visual information. The authors propose a "Decompose, Look, and Reason" (DLR) framework that first dynamically decomposes a query into textual premises, then extracts sequential visual latent variables conditioned on these premises, and finally performs reasoning grounded in visual evidence to generate answers. The approach introduces three key innovations: spherical Gaussian latent variable modeling, a reinforcement learning–driven latent reasoning mechanism, and a three-stage training pipeline, together enabling efficient exploration in the latent space. Evaluated across multiple vision-centric benchmarks, the method significantly outperforms strong baselines—including purely textual, interleaved multimodal chain-of-thought, and existing latent reasoning models—while simultaneously enhancing both reasoning accuracy and step-wise interpretability.
📝 Abstract
Vision-Language Models often struggle with complex visual reasoning due to the loss of visual information in textual CoT. Existing methods either incur the cost of tool calls or rely on localized patch-based embeddings that are insufficient to capture the semantics needed for multi-step reasoning. We propose \emph{"Decompose, Look, and Reason" (DLR)}, a reinforced latent reasoning framework that dynamically decomposes queries into textual premises, extracts premise-conditioned continuous visual latents, and deduces answers through grounded rationales. We introduce a three-stage training pipeline and propose a novel Spherical Gaussian Latent Policy to enable effective exploration in the latent space. Extensive experiments on vision-centric benchmarks show that DLR consistently outperforms strong baselines, including text-only, interleaved multimodal CoT, and latent reasoning methods, while providing superior stepwise interpretability.
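The abstract does not spell out what a "Spherical Gaussian Latent Policy" looks like concretely. A minimal sketch, assuming the common reading of the term — an isotropic Gaussian perturbation around a predicted mean direction, with the sample projected back onto the unit sphere so exploration happens in directions rather than magnitudes — might look like the following. All function and variable names here are illustrative, not taken from the paper.

```python
import math
import random

def normalize(v):
    """Project a vector onto the unit sphere."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def sample_spherical_latent(mu, sigma, rng):
    """Hypothetical spherical Gaussian latent sample:
    isotropic Gaussian noise around the mean direction `mu`,
    re-normalized to unit norm (reparameterization-style, so the
    sample stays differentiable w.r.t. `mu` in an autodiff setting)."""
    mu = normalize(mu)
    z = [m + sigma * rng.gauss(0.0, 1.0) for m in mu]
    return normalize(z)

rng = random.Random(0)
z = sample_spherical_latent([1.0] * 8, sigma=0.1, rng=rng)
norm = math.sqrt(sum(x * x for x in z))  # always 1.0 by construction
```

Constraining latents to the unit sphere keeps the exploration space bounded, which is one plausible reason such a policy would make RL-driven search over continuous visual latents tractable.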