🤖 AI Summary
This work addresses the limited interpretability and lack of verifiable trust mechanisms in current multimodal large language models (MLLMs) when operating in zero-shot settings as black boxes. The authors propose an explicit logical reasoning channel that runs in parallel with the MLLM’s implicit inference, integrating large language models, vision foundation models, and probabilistic reasoning to support fact-based, counterfactual, and relational reasoning grounded in visual evidence. A novel consistency rate (CR) metric—requiring no ground-truth labels—is introduced to enable cross-channel validation and model selection. Experiments across two task types (MC-VQA and HC-REC) and three benchmarks demonstrate that the proposed approach significantly enhances the zero-shot performance, reliability, and interpretability of 11 mainstream MLLMs.
📝 Abstract
Frontier Multimodal Large Language Models (MLLMs) exhibit remarkable capabilities in Visual-Language Comprehension (VLC) tasks. However, they are often deployed as zero-shot solutions to new tasks in a black-box manner, so validating and understanding their behavior becomes important when applying them to new tasks. We propose an Explicit Logic Channel (ELC), operating in parallel with the black-box model channel, to perform explicit logical reasoning for model validation, selection, and enhancement. The frontier MLLM, encapsulating latent vision-language knowledge, can be regarded as an Implicit Logic Channel. The proposed ELC, mimicking human logical reasoning, incorporates an LLM, a Vision Foundation Model (VFM), and logical reasoning with probabilistic inference for factual, counterfactual, and relational reasoning over explicit visual evidence. A Consistency Rate (CR) is proposed for cross-channel validation and model selection, even without ground-truth annotations. Additionally, cross-channel integration further improves zero-shot performance over MLLMs, grounded in explicit visual evidence to enhance trustworthiness. Comprehensive experiments are conducted on two representative VLC tasks, i.e., MC-VQA and HC-REC, across three challenging benchmarks, with 11 recent open-source MLLMs from 4 frontier families. Our systematic evaluations demonstrate the effectiveness of the proposed ELC and CR for model validation, selection, and improvement of MLLMs, with enhanced explainability and trustworthiness.
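The abstract does not give the exact formula for the Consistency Rate, but a label-free agreement metric between two prediction channels is most naturally the fraction of samples on which they agree. The sketch below illustrates that reading; the function name `consistency_rate` and the example predictions are hypothetical, not taken from the paper.

```python
def consistency_rate(implicit_answers, explicit_answers):
    """Hypothetical CR: fraction of samples where the implicit channel
    (the black-box MLLM) and the explicit logic channel agree.
    Requires no ground-truth labels, only the two channels' outputs."""
    if len(implicit_answers) != len(explicit_answers):
        raise ValueError("channels must score the same samples")
    agree = sum(a == b for a, b in zip(implicit_answers, explicit_answers))
    return agree / len(implicit_answers)

# Illustrative MC-VQA answer letters from each channel (made-up data):
mllm_preds = ["A", "C", "B", "D"]  # implicit channel (MLLM)
elc_preds = ["A", "B", "B", "D"]   # explicit logic channel
print(consistency_rate(mllm_preds, elc_preds))  # → 0.75
```

Under this reading, a higher CR on an unlabeled task indicates that the MLLM's latent reasoning aligns with evidence-grounded explicit reasoning, which is what makes CR usable for model selection without annotations.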