Nash CoT: Multi-Path Inference with Preference Equilibrium

📅 2024-06-18
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-path chain-of-thought (CoT) reasoning faces a fundamental trade-off among the number of reasoning paths, accuracy, and computational cost. Method: This paper proposes a Nash-equilibrium-inspired competition mechanism between role-specialized prompting and generic model generation on each path, enabling path reduction while preserving diversity and accuracy. To our knowledge, this is the first work to integrate game-theoretic equilibrium principles into multi-path CoT, establishing a preference-driven competitive generation framework that reduces reliance on redundant paths. Results: On Arabic reasoning, commonsense question answering, and symbolic reasoning benchmarks, the method matches or exceeds the accuracy of standard multi-path CoT while using significantly fewer reasoning paths, substantially improving inference efficiency and generalization robustness.

📝 Abstract
Chain of thought (CoT) is a reasoning framework that can enhance the performance of large language models (LLMs) on complex inference tasks. Among the various studies related to CoT, multi-path inference stands out as a simple yet effective improvement. However, there is no optimal setting for the number of inference paths: better results require more paths, which in turn increases inference cost. To address this limitation, question-related role templates can guide LLMs into relevant roles, raising the probability of a correct inference on each path and thereby reducing the dependence on the number of paths while improving accuracy. However, placing LLMs into specific roles may reduce their reasoning diversity and hurt performance on tasks where role dependence is low. To alleviate excessive immersion of the LLM in a specific role, we propose Nash CoT, which constructs a competitive system on each path that balances role-specific generation against generic generation, ensuring both effective role adoption and generation diversity, and maintaining multi-path inference performance while reducing the number of inference paths required. We evaluate Nash CoT across various inference tasks, including Arabic Reasoning, Commonsense Question Answering, and Symbolic Inference, achieving results comparable to or better than those of multi-path CoT with an equal number of inference paths.
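The per-path competition described above can be sketched in code. The snippet below is a minimal, hypothetical illustration, not the paper's actual algorithm: the LLM calls are replaced by canned outputs, and the preference equilibrium is approximated by a simple agreement check between the role-conditioned and generic generations on each path, followed by the usual majority vote across paths.

```python
from collections import Counter

# Hypothetical canned outputs standing in for real LLM sampling.
# In the actual method these would come from a role-templated LLM
# and a generically prompted LLM on each inference path.
ROLE_ANSWERS = ["42", "42", "41", "42"]
GENERIC_ANSWERS = ["42", "41", "42", "42"]

def role_generate(question: str, path: int) -> str:
    """Stand-in for a role-templated LLM call (e.g. 'You are a mathematician...')."""
    return ROLE_ANSWERS[path]

def generic_generate(question: str, path: int) -> str:
    """Stand-in for an unconstrained LLM call on the same question."""
    return GENERIC_ANSWERS[path]

def nash_cot(question: str, n_paths: int = 4) -> str:
    votes = []
    for path in range(n_paths):
        role_ans = role_generate(question, path)
        generic_ans = generic_generate(question, path)
        # Crude analogue of the preference equilibrium: keep the
        # role-conditioned answer only when the generic generation
        # agrees with it; otherwise fall back to the generic answer
        # so the role template cannot dominate the path.
        votes.append(role_ans if role_ans == generic_ans else generic_ans)
    # Standard multi-path aggregation: majority vote over paths.
    answer, _ = Counter(votes).most_common(1)[0]
    return answer

print(nash_cot("What is 6 * 7?"))  # → 42
```

Because each path resolves the role-vs-generic competition before voting, fewer paths are needed to stabilize the final answer than in plain self-consistency sampling.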
Problem

Research questions and friction points this paper is trying to address.

Chain-of-Thought (CoT)
Multi-path Inference
Large Language Models (LLMs)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nash Chain of Thought
Enhanced Multi-path Inference
Role-playing in LLMs
Ziqi Zhang
School of Engineering, Westlake University
Cunxiang Wang
Tsinghua University; ZhipuAI
Large Language Models · LLM Evaluation · LLM Post-training
Xiong Xiao
Principal Applied Scientist, Microsoft
Deep learning based signal processing · speech recognition · keyword search
Yue Zhang
School of Engineering, Westlake University
Donglin Wang
School of Engineering, Westlake University