Chain of Questions: Guiding Multimodal Curiosity in Language Models

📅 2025-08-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current multimodal large language models lack the capability to autonomously select and integrate heterogeneous sensory modalities—such as visual, auditory, and spatial inputs—thereby limiting complex-scene reasoning. To address this, we propose the Chain of Questions (CoQ) framework, the first to introduce a curiosity-driven, dynamic questioning mechanism into multimodal reasoning: the model proactively generates a chain of interdependent questions and accordingly adapts its selection and fusion of multimodal information. This enhances both interpretability and task-specific adaptability of reasoning. Evaluated on a novel multimodal benchmark comprising WebGPT, ScienceQA, AVSD, and ScanQA, CoQ achieves substantial accuracy improvements, reliably identifies critical sensory cues, and renders the reasoning process more transparent and cognitively aligned with human inference patterns.

Technology Category

Application Category

📝 Abstract
Reasoning capabilities in large language models (LLMs) have substantially advanced through methods such as chain-of-thought and explicit step-by-step explanations. However, these improvements have not yet fully transitioned to multimodal contexts, where models must proactively decide which sensory modalities such as vision, audio, or spatial perception to engage when interacting with complex real-world environments. In this paper, we introduce the Chain of Questions (CoQ) framework, a curiosity-driven reasoning approach that encourages multimodal language models to dynamically generate targeted questions regarding their surroundings. These generated questions guide the model to selectively activate relevant modalities, thereby gathering critical information necessary for accurate reasoning and response generation. We evaluate our framework on a novel multimodal benchmark dataset, assembled by integrating WebGPT, ScienceQA, AVSD, and ScanQA datasets. Experimental results demonstrate that our CoQ method improves a foundation model's ability to effectively identify and integrate pertinent sensory information. This leads to improved accuracy, interpretability, and alignment of the reasoning process with diverse multimodal tasks.
Problem

Research questions and friction points this paper is trying to address.

Enhancing multimodal reasoning in language models
Guiding models to select relevant sensory modalities
Improving accuracy in diverse multimodal tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

CoQ framework guides multimodal curiosity dynamically
Generates targeted questions to activate relevant modalities
Improves accuracy and interpretability in multimodal tasks
🔎 Similar Papers
No similar papers found.