MTQA: Matrix of Thought for Enhanced Reasoning in Complex Question Answering

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Large language models (LLMs) exhibit limited reasoning capability in complex multi-hop question answering; existing chain-of-thought (CoT) and tree-of-thought (ToT) methods suffer from single-path reasoning or structural redundancy; and retrieval-augmented generation (RAG) struggles to integrate multi-entity, multi-hop knowledge efficiently. Method: We propose the Matrix-of-Thought (MoT) framework, which establishes a two-dimensional reasoning structure: row-wise parallel exploration across diverse reasoning strategies, and column-wise cross-unit knowledge calibration and communication. MoT integrates retrieval augmentation, knowledge-graph triple extraction, semantic mapping from text to knowledge units, and a graph- and text-guided factual correction mechanism to enhance knowledge fidelity. Contribution/Results: On four benchmark complex QA datasets, MoT achieves significant improvements in F1 and exact match (EM) scores over state-of-the-art methods while requiring only 14.4% of the baseline inference time. The implementation is publicly available.
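The two-dimensional structure described above can be sketched in a few lines. This is a minimal illustration, not the paper's released implementation: the `llm` callable, the strategy labels, and the prompt formats are all assumptions made for the sake of the example. Each column is generated from the previous one, so every cell can read its peers' latest thoughts (the "column-cell communication" mechanism) before refining its own.

```python
from typing import Callable, List

def matrix_of_thought(
    question: str,
    strategies: List[str],
    depth: int,
    llm: Callable[[str], str],
) -> List[List[str]]:
    """Fill a reasoning matrix: rows = strategies, columns = refinement steps.

    Column 0 explores the question independently per strategy; each later
    column lets every cell read the whole previous column (cross-row
    communication) before refining, rather than following a single chain.
    """
    matrix: List[List[str]] = []
    # Column 0: independent, strategy-specific exploration.
    matrix.append([llm(f"[{s}] explore: {question}") for s in strategies])
    for _ in range(1, depth):
        # Share the entire previous column across rows before refining.
        shared = " | ".join(matrix[-1])
        matrix.append([llm(f"[{s}] refine using peers: {shared}") for s in strategies])
    return matrix

# Toy stand-in for a real LLM call, so the sketch runs end to end.
def toy_llm(prompt: str) -> str:
    return f"thought<{len(prompt)}>"

mot = matrix_of_thought(
    "Who directed the film that won Best Picture in 1998?",
    ["decompose", "bridge-entity"],
    depth=3,
    llm=toy_llm,
)
```

With two strategies and depth 3, `mot` is a 3-column matrix of 2 cells each; swapping `toy_llm` for a real model call is the only change a working variant would need.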

📝 Abstract
Complex Question Answering (QA) is a fundamental and challenging task in NLP. While large language models (LLMs) exhibit impressive performance in QA, they suffer from significant performance degradation when facing complex and abstract QA tasks due to insufficient reasoning capabilities. Works such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT) aim to enhance LLMs' reasoning abilities, but they face issues such as in-layer redundancy in tree structures and single paths in chain structures. Although some studies utilize Retrieval-Augmented Generation (RAG) methods to assist LLMs in reasoning, the challenge of effectively utilizing large amounts of information involving multiple entities and hops remains critical. To address this, we propose the Matrix of Thought (MoT), a novel and efficient LLM thought structure. MoT explores the problem in both horizontal and vertical dimensions through the "column-cell communication" mechanism, enabling LLMs to actively engage in multi-strategy and deep-level thinking, reducing redundancy within the column cells and enhancing reasoning capabilities. Furthermore, we develop a fact-correction mechanism by constructing knowledge units from retrieved knowledge graph triples and raw text to enhance the initial knowledge for LLM reasoning and correct erroneous answers. This leads to the development of an efficient and accurate QA framework (MTQA). Experimental results show that our framework outperforms state-of-the-art methods on four widely-used datasets in terms of F1 and EM scores, with reasoning time only 14.4% of that of the baseline methods, demonstrating both its efficiency and accuracy. The code for this framework is available at https://github.com/lyfiter/mtqa.
Problem

Research questions and friction points this paper is trying to address.

Enhancing reasoning in complex QA tasks
Reducing redundancy in LLM thought structures
Improving accuracy with fact-correction mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Matrix of Thought enables multi-dimensional reasoning
Column-cell communication reduces redundancy in reasoning
Knowledge unit correction enhances answer accuracy
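The knowledge-unit correction idea in the last bullet can be sketched as follows. This is an illustrative simplification under stated assumptions, not the paper's method: the `KnowledgeUnit` pairing of a KG triple with its source passage, and the substring-based support check, are stand-ins for the paper's semantic mapping and graph- and text-guided correction.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class KnowledgeUnit:
    triple: Tuple[str, str, str]  # (head, relation, tail) extracted from the KG
    source_text: str              # raw passage the triple was extracted from

def build_units(triples: List[Tuple[str, str, str]],
                passages: List[str]) -> List[KnowledgeUnit]:
    # Pair each extracted triple with the passage it came from.
    return [KnowledgeUnit(t, p) for t, p in zip(triples, passages)]

def fact_correct(answer: str, question: str,
                 units: List[KnowledgeUnit]) -> str:
    # Accept the answer if any unit's raw text supports it verbatim.
    if any(answer.lower() in u.source_text.lower() for u in units):
        return answer
    # Otherwise fall back to the tail entity of a triple whose head
    # entity appears in the question -- a crude grounded correction.
    for u in units:
        head, _, tail = u.triple
        if head.lower() in question.lower():
            return tail
    return answer

# Toy example (illustrative data, not from the paper's datasets).
units = build_units(
    [("France", "has_capital", "Paris")],
    ["Paris is the capital of France."],
)
```

Here `fact_correct("Lyon", "What is the capital of France?", units)` is overridden to the grounded tail entity, while an answer already supported by the source text passes through unchanged.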
Fengxiao Tang
Tohoku University, Central South University
deep learning, wireless networks
Yufeng Li
East China Normal University
Artificial Intelligence
Zongzong Wu
School of Computer Science and Engineering, Central South University, Changsha, China
Ming Zhao
School of Computer Science and Engineering, Central South University, Changsha, China