🤖 AI Summary
Existing implicit reasoning approaches in multimodal large language models struggle to preserve critical visual information in intermediate latent representations. To address this limitation, this work proposes CrystaL, a single-stage dual-path framework that processes pristine and corrupted images in parallel. Through attention alignment and prediction distribution matching, and without requiring additional annotations, human-provided supervision signals, or external modules, the framework guides visual latent variables to self-organize into structured, crystalline representations that explicitly surface task-relevant, fine-grained semantics. Extensive evaluation demonstrates that CrystaL significantly outperforms state-of-the-art models across multiple perception-intensive benchmarks, achieving substantial gains in fine-grained visual understanding while maintaining strong reasoning capabilities.
📝 Abstract
Multimodal Large Language Models (MLLMs) have achieved remarkable performance by integrating powerful language backbones with large-scale visual encoders. Within this line of work, latent Chain-of-Thought (CoT) methods enable implicit reasoning in continuous hidden states, facilitating seamless vision-language integration and faster inference. However, the heuristically predefined supervision signals used in existing latent CoT methods provide limited guidance for preserving critical visual information in intermediate latent states. To address this limitation, we propose CrystaL (Crystallized Latent Reasoning), a single-stage framework with two parallel paths that process intact and corrupted images, respectively. By explicitly aligning the attention patterns and prediction distributions across the two paths, CrystaL crystallizes latent representations into task-relevant visual semantics, without relying on auxiliary annotations or external modules. Extensive experiments on perception-intensive benchmarks demonstrate that CrystaL consistently outperforms state-of-the-art baselines, achieving substantial gains in fine-grained visual understanding while maintaining robust reasoning capabilities.
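The two cross-path objectives described above can be sketched as simple losses. This is a minimal illustrative sketch, not the authors' implementation: the specific choices here (mean-squared error on attention maps, KL divergence on output distributions, and the weights `alpha`/`beta`) are assumptions for concreteness.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the last axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_alignment_loss(attn_clean, attn_corrupt):
    # Assumed form: mean squared error between the attention maps
    # produced by the intact-image and corrupted-image paths.
    return float(np.mean((attn_clean - attn_corrupt) ** 2))

def prediction_matching_loss(logits_clean, logits_corrupt):
    # Assumed form: KL(p_clean || p_corrupt) between the two paths'
    # next-token prediction distributions, averaged over the batch.
    p = softmax(logits_clean)
    q = softmax(logits_corrupt)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(kl.mean())

def dual_path_alignment_loss(attn_clean, attn_corrupt,
                             logits_clean, logits_corrupt,
                             alpha=1.0, beta=1.0):
    # Hypothetical combined objective: weighted sum of the two terms.
    return (alpha * attention_alignment_loss(attn_clean, attn_corrupt)
            + beta * prediction_matching_loss(logits_clean, logits_corrupt))
```

When the two paths agree exactly, both terms are zero; any divergence in where the model attends or what it predicts contributes a positive penalty, which is what pushes the latent states to retain the visual information the corrupted path would otherwise lose.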