🤖 AI Summary
This work addresses the challenge faced by multimodal large language models (MLLMs) in performing visual abductive reasoning—specifically, inferring the most plausible explanation from incomplete observations. To this end, the authors propose the first dual-module framework that integrates linguistic abduction with visual imagination, comprising a REASONER and an IMAGINER. The REASONER leverages a blind LLM to generate candidate hypotheses, while the IMAGINER employs a text-to-image diffusion model to synthesize corresponding visual scenes. A cross-modal causal alignment mechanism filters for explanations that exhibit both contextual plausibility and causal consistency. The entire system is jointly optimized in an end-to-end manner, significantly enhancing the MLLM’s ability to reason with contextual awareness and causal coherence. Evaluated on standard visual abductive reasoning benchmarks, the proposed method outperforms both conventional approaches and state-of-the-art MLLMs.
📝 Abstract
Visual abductive reasoning (VAR) is a challenging task that requires AI systems to infer the most likely explanation for incomplete visual observations. While recent MLLMs have developed strong general-purpose multimodal reasoning capabilities, they fall short of humans in abductive inference. To bridge this gap, we draw inspiration from the interplay between verbal and pictorial abduction in human cognition, and propose to strengthen the abduction of MLLMs by mimicking such dual-mode behavior. Concretely, we introduce AbductiveMLLM, comprising two synergistic components: REASONER and IMAGINER. The REASONER operates in the verbal domain. It first explores a broad space of possible explanations using a blind LLM and then prunes visually incongruent hypotheses based on cross-modal causal alignment. The remaining hypotheses are introduced into the MLLM as targeted priors, steering its reasoning toward causally coherent explanations. The IMAGINER, on the other hand, further guides MLLMs by emulating human-like pictorial thinking. It conditions a text-to-image diffusion model on both the input video and the REASONER's output embeddings to "imagine" plausible visual scenes that correspond to the verbal explanations, thereby enriching MLLMs' contextual grounding. The two components are trained jointly in an end-to-end manner. Experiments on standard VAR benchmarks show that AbductiveMLLM achieves state-of-the-art performance, consistently outperforming traditional solutions and advanced MLLMs.
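The two-stage flow the abstract describes (generate verbal hypotheses, prune by cross-modal causal alignment, then "imagine" scenes for the survivors) can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: every function, candidate hypothesis, and alignment score below is a stand-in for the real blind LLM, diffusion model, and MLLM components.

```python
# Hypothetical sketch of the AbductiveMLLM inference flow. All names,
# hypotheses, and scores are illustrative placeholders, not the paper's code.
from dataclasses import dataclass


@dataclass
class Hypothesis:
    text: str
    alignment: float  # stand-in for the cross-modal causal alignment score


def reasoner(observation: str) -> list[Hypothesis]:
    """Stage 1: a blind LLM would propose candidate explanations from text
    alone. Here we return fixed candidates with mock alignment scores."""
    return [
        Hypothesis("the glass was knocked over by the cat", 0.91),
        Hypothesis("an earthquake shattered the glass", 0.22),
        Hypothesis("someone dropped the glass while washing it", 0.78),
    ]


def prune(hyps: list[Hypothesis], threshold: float = 0.5) -> list[Hypothesis]:
    """Discard hypotheses whose causal alignment with the visual input is low."""
    return [h for h in hyps if h.alignment >= threshold]


def imaginer(hyps: list[Hypothesis]) -> list[str]:
    """Stage 2: a text-to-image diffusion model would render each surviving
    hypothesis as an imagined scene; here we return scene descriptions."""
    return [f"imagined scene: {h.text}" for h in hyps]


def abductive_answer(observation: str) -> str:
    """The pruned hypotheses (and imagined scenes) would be fed to the MLLM
    as priors; this mock simply selects the best-aligned hypothesis."""
    survivors = prune(reasoner(observation))
    _scenes = imaginer(survivors)  # would condition the MLLM's reasoning
    return max(survivors, key=lambda h: h.alignment).text


print(abductive_answer("a broken glass on the kitchen floor"))
```

The key design point mirrored here is the ordering: hypothesis generation is deliberately decoupled from vision (the "blind" LLM), and visual evidence enters only afterward, first as a filter and then as imagined scenes that ground the final choice.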